The exact quote:
It is important to us, and we’ve tried to be really clear, we are not doing the yearly cadence. We’re not going to do a bump every year. There’s no reason to do that. And, honestly, from our perspective, that’s kind of not really fair to your customers to come out with something so soon that’s only incrementally better. So we really do want to wait for a generational leap in compute without sacrificing battery life before we ship the real second generation of Steam Deck. But it is something that we’re excited about and we’re working on.
I think you need to take a step back and ask whether ARM makes sense if you're translating x86 instructions 100% of the time. Unless you're hoping people will develop new games natively for ARM and you won't use your Steam Deck to play existing titles much, but that seems like a 180° shift to me.
Just to add, since we're mainly discussing ARM vs x86 now: that is just a small part of the whole device. Just look at the Steam Deck OLED vs LCD. They managed to ship an OLED screen that is significantly better than the LCD one while using less power on average, which is a huge deal for battery life. It either lets you compensate by giving the SoC more power for better performance at the same battery life, or take the saving and go with longer battery life… and that's just the screen.
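To put rough numbers on that trade-off (these are purely illustrative figures I made up, not Valve's actual battery or power specs), the arithmetic looks something like this:

```python
# Rough illustrative numbers only: assume a 50 Wh battery, a 9 W total system
# draw with the LCD panel, and an OLED panel that saves about 1 W on average.
battery_wh = 50.0          # hypothetical battery capacity
lcd_total_w = 9.0          # hypothetical total system draw with the LCD
oled_saving_w = 1.0        # hypothetical average saving from the OLED panel

# Option 1: pocket the saving as battery life.
runtime_lcd = battery_wh / lcd_total_w                     # ~5.6 h
runtime_oled = battery_wh / (lcd_total_w - oled_saving_w)  # ~6.3 h

# Option 2: reinvest the saving into the SoC power budget at the same runtime.
soc_headroom_w = oled_saving_w  # extra watt available to the APU

print(f"LCD runtime:  {runtime_lcd:.1f} h")
print(f"OLED runtime: {runtime_oled:.1f} h (same SoC power)")
print(f"Or keep ~{runtime_lcd:.1f} h and give the SoC +{soc_headroom_w:.0f} W")
```

Either way the panel saving is fungible: you spend it on runtime or on SoC headroom, your pick.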
Then they optimized the PCB layout, the PCB components, etc., to get both better cooling and better efficiency.
I think what is currently holding them back is both the SoCs available and the combined efficiency of the other parts. An improvement in both areas at once would lead to a significant change, but one or the other alone will not tip the scales toward a meaningful upgrade.
I honestly don’t know. No one does. Valve is working on it. We have no idea of the current state of their progress. Likely rudimentary. I was commenting purely on the efficiency.
I dunno, I think you may be underestimating ARM here. I've heard that the overhead from translating the machine code is a lot lower than you might think, because so much x86 code is already optimized down to a RISC-like subset of the instruction set. And if that overhead isn't too daunting in the common cases, the more robust power management on the ARM side of the chip market might be able to make up the difference in a handheld environment for most users. Obviously it's a huge amount of work to nail the software, and it would be on top of the work they're already doing on Linux, so I'm not saying it'll definitely be in the next iteration, but I could definitely imagine it happening eventually.
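For what it's worth, here's a toy sketch of why that translation overhead amortizes: the expensive x86-to-ARM translation of a code block only happens the first time it's seen, and hot loops then re-run the cached "native" version. This is nothing like how Rosetta 2, FEX-Emu, or Box64 are actually built, it's just the translate-once / run-many idea:

```python
# Toy sketch of a translation cache. The names and structure here are made up
# for illustration; real binary translators are vastly more sophisticated.

translation_cache = {}  # guest (x86) block address -> callable "native" code

def translate_block(guest_addr):
    # Stand-in for the expensive step: decoding x86 and emitting ARM code.
    # Here we just fabricate a trivial function so the example runs.
    def native_block(state):
        state["counter"] += 1
        return state
    return native_block

def run_block(guest_addr, state):
    # Fast path: hot blocks hit the cache and pay no translation cost.
    block = translation_cache.get(guest_addr)
    if block is None:
        block = translate_block(guest_addr)   # slow path, paid once per block
        translation_cache[guest_addr] = block
    return block(state)

state = {"counter": 0}
for _ in range(1_000_000):            # a hot loop re-executes the same block
    state = run_block(0x401000, state)
print(state["counter"], "executions, 1 translation")
```

The open question is how well that pattern holds up for game code with big, branchy working sets, which is exactly where the "100% translation" worry in the comment above comes from.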