So the Quest Pro? Foveated rendering only matters if you don't have the graphics throughput to render it all, so I don't totally buy that it's key to a good VR headset so much as it helps you get away with cheaper silicon. Maybe the TDP drops enough to enable a slimmer design.
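To put rough numbers on the "cheaper silicon" part, here's a toy sketch of how a renderer might pick a resolution scale from the angle between each pixel and where you're looking. The zone radii and scale factors are made up for illustration, not any real headset's values:

```python
import math

# Toy foveation zones: (max eccentricity in degrees, resolution scale per axis).
# These radii and scales are illustrative guesses, not any real headset's values.
FOVEATION_ZONES = [
    (5.0, 1.0),    # fovea: full resolution
    (15.0, 0.5),   # near periphery: half resolution per axis
    (40.0, 0.25),  # mid periphery: quarter resolution per axis
]
FAR_PERIPHERY_SCALE = 0.125

def resolution_scale(pixel_dir, gaze_dir):
    """Pick a resolution scale from the angle between a pixel's view direction
    and the current gaze direction (both given as unit vectors)."""
    cos_angle = sum(p * g for p, g in zip(pixel_dir, gaze_dir))
    eccentricity_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    for max_ecc, scale in FOVEATION_ZONES:
        if eccentricity_deg <= max_ecc:
            return scale
    return FAR_PERIPHERY_SCALE

# Looking straight ahead, a pixel 20 degrees off to the side renders at 1/4
# resolution per axis.
gaze = (0.0, 0.0, 1.0)
pixel = (math.sin(math.radians(20.0)), 0.0, math.cos(math.radians(20.0)))
print(resolution_scale(pixel, gaze))  # 0.25
```

Since shading cost scales roughly with pixel count, a 0.25 scale per axis works out to about 1/16 of the fragment work in that region, which is where the cheaper-silicon/lower-TDP argument comes from.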
I think foveated rendering also helps with immersion. Blurring things you aren't specifically looking at, or that are farther away, is a closer match to reality.
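Something like this toy sketch is what I mean by gaze-dependent blur; the gain and cap here are arbitrary numbers, purely for illustration:

```python
def blur_sigma_px(pixel_depth_m, gaze_depth_m, gain=4.0, max_sigma=8.0):
    """Toy gaze-contingent depth-of-field: blur grows with the dioptre
    difference (1/distance) between a pixel and the depth being fixated.
    The gain and cap are arbitrary, purely for illustration."""
    defocus_dioptres = abs(1.0 / pixel_depth_m - 1.0 / gaze_depth_m)
    return min(max_sigma, gain * defocus_dioptres)

# Fixating on something 1 m away: an object at 4 m gets a visible blur,
# one at 1.1 m stays almost sharp.
print(blur_sigma_px(4.0, 1.0))  # 3.0
print(blur_sigma_px(1.1, 1.0))  # ~0.36
```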
Reality doesn't downsample when you're not looking at it; your eye does that.
As far as I understand (and do correct me if I've got it wrong), your eyes still know they're looking at very small, very rapidly blinking lights packed close together in a flat array, which is why that exact experience mostly feels like the uncanny valley, and why software enhancement/approximation of the effect could be beneficial.
Delayed response, but if you're talking about the general experience of VR being an uncanny-valley one, then no, I don't agree. It's very common for people who use VR to say they forgot for a moment that it wasn't real.
As far as you know. Maybe that's the reasoning behind weird stuff in quantum mechanics. The cat is both alive and dead until you open the box and look at it.
The whole point of the cat thing was to point out the absurdity of the claim that reality isn't real until you know about it. The cat is already in whatever state you observe when you open the box. It's not both alive and dead; it's either alive or dead. The thought experiment isn't serious, and it doesn't support the idea that the cat is somehow magically in both states just because you haven't yet manipulated the lid of a wooden cube.
When we talk about the cat being both alive and dead, it's a simplification to help visualize a quantum phenomenon where particles exist in multiple states simultaneously until measured or observed.
Schrödinger came up with the cat to highlight what he saw as the absurdity of quantum mechanics, but that doesn't mean his metaphor isn't a useful one. Particles like electrons or photons can exist in a state of superposition, where they hold multiple potential states (e.g., spin up and spin down) at the same time. This isn't just a theoretical curiosity; it's been experimentally verified in numerous quantum experiments, such as the double-slit experiment.
The act of measurement in quantum mechanics forces a system to 'choose' a definite state from among its superposed states, a process known as wave function collapse. Before measurement, the system genuinely exists in all its possible states simultaneously, not in one state or the other. This is a fundamental aspect of the quantum world.
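For what it's worth, here's the standard textbook way of writing that for a spin-1/2 particle (just the usual notation, nothing beyond what's described above):

```latex
% A spin-1/2 particle before measurement: a genuine superposition of both outcomes.
\[
  \lvert\psi\rangle = \alpha\,\lvert\uparrow\rangle + \beta\,\lvert\downarrow\rangle,
  \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1.
\]
% Measuring spin along that axis gives "up" with probability |alpha|^2 and
% "down" with probability |beta|^2 (the Born rule); afterwards the state is
% just |up> or |down> -- the collapse described above.
```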
I don't really look at it as a symptom of a lack of graphics throughput, but more as a benefit of eye tracking, which also potentially helps others' immersion by portraying your facial expressions more realistically, or something to that effect. You could also use it as a kind of peripheral for games or software, and Apple currently uses it as a mouse, so it's not totally useless. But I also can't imagine that most developers are going to be imaginative enough to make good use of it, if we can't even think of good uses for basic shit, like haptic feedback.
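To make the "eyes as a mouse" bit concrete, here's a rough sketch of mapping a gaze ray onto a flat UI panel. The function name and panel geometry are mine for illustration, not Apple's actual API:

```python
def gaze_to_panel_uv(eye_pos, gaze_dir, panel_origin, panel_right, panel_up, panel_normal):
    """Intersect a gaze ray with a flat UI panel and return (u, v) offsets on
    that panel. Toy version: assumes the ray actually hits the plane."""
    denom = sum(d * n for d, n in zip(gaze_dir, panel_normal))
    t = sum((o - e) * n for o, e, n in zip(panel_origin, eye_pos, panel_normal)) / denom
    hit = [e + t * d for e, d in zip(eye_pos, gaze_dir)]
    offset = [h - o for h, o in zip(hit, panel_origin)]
    u = sum(a * b for a, b in zip(offset, panel_right))
    v = sum(a * b for a, b in zip(offset, panel_up))
    return u, v

# Eye at the origin, glancing slightly up and to the right at a panel 2 m ahead.
print(gaze_to_panel_uv(
    eye_pos=(0.0, 0.0, 0.0),
    gaze_dir=(0.1, 0.05, 1.0),
    panel_origin=(0.0, 0.0, 2.0),
    panel_right=(1.0, 0.0, 0.0),
    panel_up=(0.0, 1.0, 0.0),
    panel_normal=(0.0, 0.0, 1.0),
))  # (0.2, 0.1)
```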
Perhaps it breaks even in terms of letting them save money they otherwise would've spent on rendering, but I dunno if that's the case, since the camera has to be pretty low latency, and you still have to dedicate hardware resources to the eye tracking and foveated rendering to get it to look good. Weight savings, then? I just don't really know. I guess we'll see, if it gets more industry adoption.
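For rough context on the latency point: at 90 Hz you only get about 11 ms per frame, and saccades can hit several hundred degrees per second, so the gaze sample more or less has to be fresh within a frame or the sharp region trails behind where you're actually looking.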