• 0 Posts
  • 67 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • My guess was that they knew gaming was niche and were willing to invest less in this headset and more in spreading the idea that “Spatial Computing” is the next paradigm for work.

    I use VR a decent amount, and I really do like it a lot for watching TV and YouTube. I’m also toying with using it for work-from-home, where the shift in environment is surprisingly helpful.

    It’s just limited. Streaming apps aren’t very good, there’s no great source for 3D movies (which are great - Bigscreen had them for a while, anyway), headsets are still a bit too hot and heavy for long-term use, the game library isn’t very broad, there haven’t been many killer apps that distinguish it from other modalities, and it’s going to need a critical mass of adoption to get used in remote meetings.

    I really do think it’s huge for giving a sense of remote presence, and I’d love to research how VR presence affects remote collaboration, but there are so many factors keeping it tough to buy into.

    They did try, though, and I think they’re on the right track. Facial capture for remote presence and hybrid meetings, extending laptop monitors to give more privacy and flexibility, strong AR to reduce the need to take the headset off - but they’re selling the idea first, and maybe then there will be a breakthrough. I’ll admit the industry is moving much slower than I anticipated back in 2012 when I was starting VR research.


  • I think he’s basically saying that it’s racist to “artificially” integrate communities, because if they need to be integrated, then that’s the same as saying that black folks are necessarily inferior. I don’t think he’s trying to say they’re inferior, but that laws forcing integration are based on that assumption. So he can be well educated and successful because he isn’t inherently inferior, and therefore there is no need for forced integration.

    … Which is such a weird stretch of naturalism in a direction I wasn’t ready for. Naturalist BS is usually, “X deserves fewer rights because they are naturally inferior”, whereas this is “We should ignore historical circumstances because X is not naturally inferior”.

    Start a game of Monopoly after three other players have already gone around the board 10 times and created lots of rules explicitly preventing you from playing how they did, and see how far the argument of “well, to give you any kind of advantage here would just be stating you’re inferior, and we can’t do that” gets you.

    Man probably got angry at his golf handicap making him feel inferior and took things too far. Among other things.


  • Yeah, I may be wrong, but I think it usually comes down to a very specific kind of precision being needed. It’s not meant to be hostile, I think, but to provide a domain-specific explanation clearly to those who need to interpret it in a specific way. In law, specific jargon implies very specific behaviour, so it’s meant to be precise in its own way (not a law major, can’t say for sure), but it can seem completely meaningless if you aren’t prepped for it.

    Same thing in other fields. I had a professor who was very pedantic about {braces} vs [brackets] vs (parentheses), and it seemed totally unnecessary to be so corrective in discussions, but when explaining where things went wrong with a student’s work it was vital to be able to quickly differentiate them in their work so they could review the right areas or understand things faster during a lecture later down the line.

    But that noise takes longer to teach through, so if the jargon is important, it needs its own time to learn - otherwise it will be inaccessible to anyone who didn’t get the time to learn and digest it.


  • Absolutely! One of the difficulties I have with my intro courses is working out when to introduce the vocabulary, because it is important for engaging with the industry and the literature, but it adds a lot of noise to learning the underlying concepts, and some assessments end up losing sight of the concept and going straight to recalling the vocab.

    Knowing the terms can help you self-learn, but a textbook glossary could do the same thing.


  • PixelProf@lemmy.ca to Science Memes@mander.xyz · Calculus made easy · 46 points · 2 months ago

    There was a lovely computer science book for kids I can’t remember the name of, and it was all about the evil jargon trying to prevent people from mastering the magical skills of programming and algorithms. I love these approaches. I grew up in an extremely non/anti-academic environment, and I learned to explain things in non-academic ways, and it’s really helped me as an intro lecturer.

    Jargon is the mind killer. Shorthands are for people who have enough expertise to really feel the depth of a shorthand and use it to tickle the old familiar neurons it represents without needing to do the whole dance. It’s easy to forget that to a newcomer, the symbol is just a symbol.



  • My two cents: after years of Markdown (and Markdown-to-PDF solutions) and LaTeX, and a full two years of trying to commit to bashing my head against Word for work purposes, I’m really enjoying Typst. It didn’t take long to convert my themes, and having docs I can import that are basically just variables to share across documents in a folder has been really helpful. I haven’t gone too deep yet, but I’m excited to give it a deeper test run over the next little bit.


  • Lots of immediate hate for AI, but I’m all for local AI if they keep that direction. Small models are getting really impressive, and if they have smaller, fine-tuned, specific-purpose AI instead of “general purpose” LLMs, they’d be much more efficient at their jobs. I’ve been rocking local LLMs for a while, and they’ve been great as a small complement to language processing tasks in my coding.

    Good text-to-speech, page summarization, contextual content blocking, translation, bias/sentiment detection, clickbait detection, article re-titling - I’m sure there are many great use cases. And this is purely speculation, but many traditional non-LLM techniques that were overlooked because nobody cared about AI features might be able to be included here too; they could be super lightweight and still helpful. There’s a rough sketch of the small-model idea at the end of this comment.

    If it goes fully remote AI, it loses a lot of privacy cred and positions itself much like everyone else. From a financial perspective, bandwagoning on in-browser AI with a “we won’t send your data anywhere” stance seems trendy, but it’s a potentially helpful and effective way to bring in a demographic interested in AI without sacrificing principles.

    But there’s a lot of speculation in this comment. Mozilla’s done a lot for FOSS, and I get they need monetization outside of Google, but hopefully it doesn’t lead things astray too hard.
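
    To make the local angle concrete, here’s a minimal sketch of the kind of small, task-specific models I mean, using Hugging Face transformers (the model names are just examples of small checkpoints, not anything Mozilla has announced):

        # Task-specific local models instead of one big general-purpose LLM.
        # Assumes: pip install transformers torch. Model choices are examples;
        # swap in whatever fits your hardware.
        from transformers import pipeline

        # Small (~300M parameter) summarizer for page summaries.
        summarize = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

        # Small sentiment classifier for bias/sentiment flags.
        sentiment = pipeline("sentiment-analysis",
                             model="distilbert-base-uncased-finetuned-sst-2-english")

        page_text = "Several paragraphs of article text would go here..."
        print(summarize(page_text, max_length=60, min_length=10)[0]["summary_text"])
        print(sentiment("This headline seems engineered to outrage.")[0])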



  • Yeah, this is the approach people are trying to take more now. The problem is generally the amount of data needed and verifying that it’s high quality in the first place, but these systems are positive feedback loops, both in training and in use. If you train on higher quality code, it will write higher quality code, but it will be less able to handle edge cases, or to complete code in a salient way when the surrounding code isn’t at the same quality bar or style as the training data.

    On the use side, if you provide higher quality code as input when prompting, it is more likely to predict higher quality code, because it’s continuing what was already written. Using standard approaches, documenting, and generally following good practice in your code before sending it to the LLM will majorly improve results.
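
    As a toy illustration (function names are made up, and results vary by model), here’s the same routine presented two ways as prompt context - a model continuing the second version is far more likely to keep producing typed, documented, edge-case-aware code:

        # Low-signal context: terse, untyped, undocumented.
        def f(xs):
            return sum(xs) / len(xs)

        # High-signal context: types, docstring, explicit edge-case handling.
        def mean_rating(ratings: list[float]) -> float:
            """Return the arithmetic mean of ratings.

            Raises ValueError on an empty list instead of dividing by zero.
            """
            if not ratings:
                raise ValueError("ratings must be non-empty")
            return sum(ratings) / len(ratings)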


  • PixelProf@lemmy.ca to ADHD memes@lemmy.dbzer0.com · Adrenaline Wave · 4 points · edited · 10 months ago

    Every time. Try to get ahead of your work? Well, good for you, that first 20% went really well, now let’s spend the next two weeks on “work” that interferes with your other needs and needs to get thrown out because there’s no way it’s integrating with the other 80% that needs to happen within the next hour and also everything that you did for the other 20% is useless and needs to be redone now that you broke it with that tangent.

    It’s been a painful summer “preparing” to teach my fall courses.


  • I sit somewhere tangential on this - I think Bret Victor’s thoughts are valid here, or my interpretation of them - that we need to start revisiting our tooling. Our IDEs should be doing a lot more heavy lifting to suit our needs and reduce the amount of cognitive load that’s better suited for the computer anyways. I get it’s not as valid here as other use cases, but there’s some room for improvements.

    Having it in separate functions is more testable, maintainable, and readable when we’re thinking about control flow. Sometimes we want to look at a function and understand the nuts and bolts, and sometimes we just want to know the overall flow. Why can’t we swap between views and inline the functions in our IDE when we want to see the full flow? In fact, why can’t we see the function inline, with the parameter variables replaced by the passed values, to get a feel for how the function will flow and compute what can be easily computed (assuming no global state)? There’s a rough sketch of what I mean at the end of this comment.

    I could be completely off base, but more and more recently - especially after years of teaching introductory programming - I’m leaning toward the idea that our IDEs should be doubling down on taking advantage of language features, live computation, and co-operating with our coding style… and not just OOP. I’d love to hear about angles I might be overlooking. Maybe this is all a moot point, but I think code design and tooling should go hand in hand.
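
    Here’s a rough standard-library sketch of that “inline the call” view - inline_view is a made-up helper, and a real IDE feature would rewrite the AST rather than print text, but it shows the idea of pinning parameters to the values at a call site:

        # Show a function's source with the current call's arguments pinned
        # above it, so the body can be read with concrete values in mind.
        import inspect

        def inline_view(fn, *args, **kwargs):
            # Bind the call-site arguments to the function's parameters.
            bound = inspect.signature(fn).bind(*args, **kwargs)
            bound.apply_defaults()
            for name, value in bound.arguments.items():
                print(f"# {name} = {value!r}")
            print(inspect.getsource(fn))

        def scaled_sum(values, scale=1.0):
            total = 0.0
            for v in values:
                total += v * scale
            return total

        # Prints "# values = [1, 2, 3]" and "# scale = 0.5" above the source.
        inline_view(scaled_sum, [1, 2, 3], scale=0.5)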