Do people actually want this?
Like, I know the megacorps that control our lives do (since it’s a cheap way of adding value to their products), but what about actual users? I think many see it as a novelty and a toy rather than a productivity tool, especially as public awareness of “hallucinations” and of the plight faced by artists grows.
Kinda feels like the whole “voice controlled assistants” bubble that happened a while ago. Sure they are relatively commonplace nowadays, but nowhere near as universal as people thought they would be.
Nope. Just like those stupid hard coded buttons on my Roku remote that I have never used.
For me it’s those stupid hard-coded buttons on my remote that I accidentally press every so often, then have to repeatedly back/exit out of whatever stupid thing they launched, which I can’t remove/uninstall from my TV.
Super glue, or pliers and super glue.
If you can figure out how to get the remote open, you’ll probably find that the buttons are all part of the same flexible rubbery insert (unless it’s 10+ years old). Put a little tape on the bottoms of the ones causing you problems. The insulation should keep them from working, and it’s 100% reversible if you ever do find a use for them.
If it’s one of the older, more expensive remotes with individual switches, then, yeah, pliers and superglue. 😅
And it just has to launch a hastily scribbled, overloaded UI that takes forever to load, with no content because you don’t have an account and/or aren’t connected to Wi-Fi.
https://xdaforums.com/t/guide-remapping-android-tv-remote-buttons.4433617/
Absolutely not. But this is the new standard now.
The new Micro$oft standard, which, as always, is bullshit and should be avoided and ignored at all times.
Yes. The Microsoft standard. Like the Windows key on all keyboards nowadays.
Not on all of them
Maybe I’m a pessimist but this is going to really resonate with the people who are “looking forward to AI” because they read headlines, but haven’t actually used any LLMs yet because nobody has told them how.
I want a voice-controlled assistant that runs locally, is fully FOSS, and that I can just run on my bog-standard Linux PC, hardware minimum requirements notwithstanding.
All I want is a real life iteration of J.A.R.V.I.S. and several billion dollars so I can blurt out cool ideas and have them rendered and built in a couple hours.
I’ll be good I promise.
Mycroft was the best bet for this; its work is now being continued by OpenVoiceOS.
Not a single soul wants this. They just want to use every foul trick to get you to use copilot (by accident even) just like they do with bing and their other garbage.
Another key to bind to something else? Hell yeah
Nope, just a new logo on an existing key.
:(
Current LLMs are manifestly different from Cortana (🤢) because they are actually somewhat intelligent. Microsoft’s Copilot can do web search and perform basic tasks on the computer, and because of their exclusive contract with OpenAI they’re gonna have access to more advanced versions of GPT, which will be able to do more high-level control and automation on the desktop. It will 100% be useful for users to have this available, and I expect even Linux desktops will eventually add local LLM support (once consumer compute and the tech mature). It is not just glorified autocomplete; its output actually correlates fairly well with real human language cognition.
The main issue for me is that they get all the data you input and mine it for better models without your explicit consent. This isn’t an area where open source can catch up without significant capital behind it, so we have to hope Meta, Mistral, and government-funded projects give us what we need to have a competitor.
Sure, all that may be true, but it doesn’t answer my original concern: is this something that people want as a core feature of their OS? My point wasn’t “oh, this is only as technically sophisticated as voice assistants”; it was more that voice assistants never really took off as much as people thought they would. I may be cynical and grumpy, but to me it feels like these companies are failing to read the market.
I’m reminded of a presentation I saw where they were showing off fancy AI technology. Basically, if you were in a 1-to-1 call with someone and had to leave to answer the doorbell or something, the other person could keep speaking and an AI would summarise what they said once you got back.
It felt so out of touch with what people would actually want to do in that situation.
I hope the LLM bubble pops this year. The degree of overinvestment by megacorps is staggering.
I suppose, having worked with LLMs a whole bunch over the past year, I have a better sense of what I meant by “automate high level tasks”.
I’m talking about an assistant where, let’s say, you need to edit a podcast video to add graphics and cut out dead space or mistakes that you corrected in the recording. You could tell the assistant to do that, and it would open the video in Adobe Premiere Pro, do the necessary tasks, then ask you to review the result to check whether it made mistakes.
Or if you had an issue with a particular device, e.g. your display, the assistant would research the problem and perform the necessary steps to troubleshoot and fix it.
These are hypothetical scenarios for now, but GPT-4 can already perform some of these tasks, and specifically training it to be a desktop assistant and to handle more agentic tasks will make this a reality within a few years.
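To make “agentic” concrete: the sketch below is roughly what such an assistant could look like on top of the current OpenAI tool-calling API. Everything in it is an assumption for illustration, the run_command tool, the model name, the five-iteration cap, and any real version would need sandboxing and an explicit review step before anything executes.

```python
import json
import subprocess

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One hypothetical tool the model may call; a real assistant would expose many
# more, and would never run anything without the user's approval.
tools = [{
    "type": "function",
    "function": {
        "name": "run_command",
        "description": "Run a shell command on the user's machine and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

messages = [{"role": "user",
             "content": "My external display keeps flickering. Help me troubleshoot it."}]

for _ in range(5):  # crude iteration cap instead of real planning
    resp = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:          # no more tool use: the model is answering
        print(msg.content)
        break
    messages.append(msg)            # keep the assistant's tool request in the history
    for call in msg.tool_calls:
        cmd = json.loads(call.function.arguments)["command"]
        input(f"Model wants to run: {cmd!r} — press Enter to allow, Ctrl-C to abort ")
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": (result.stdout + result.stderr)[-4000:],  # truncate long output
        })
```

The loop itself is trivial; the hard part is deciding which tools to expose and how to gate them, which is exactly the “desktop assistant” work I’m talking about.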
It’s also already useful for reading and editing long documents, and it will only get better on that front. You can already use an LLM to query your documents and give you summaries, or use them as instructions/research to aid in performing a task.
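As a minimal illustration of the “query your documents” point: the snippet below asks an OpenAI-compatible endpoint to summarise a file. The base_url assumes a local Ollama server, but a hosted API or llama.cpp’s server would look the same; the model name, file path, and naive truncation are all placeholder assumptions.

```python
from openai import OpenAI

# Any OpenAI-compatible endpoint works here; this base_url assumes a local
# Ollama server, but a hosted API would only change the URL and key.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

with open("meeting-notes.txt") as f:   # hypothetical document
    doc = f.read()

resp = client.chat.completions.create(
    model="mistral",                   # whatever model the server has loaded
    messages=[
        {"role": "system",
         "content": "Summarise the user's document in five bullet points."},
        {"role": "user", "content": doc[:8000]},   # naive truncation, not real chunking
    ],
)
print(resp.choices[0].message.content)
```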
I guess my understanding of an LLM must be way off base.
I had thought that asking an LLM to edit a video was simply out of scope. Like asking your self driving car to wash the dishes.
A year ago local LLMs just weren’t there, but the stuff you can run now with 8 GB of VRAM is pretty amazing, if not quite as good yet as GPT-4. Honestly, even if it stops right where it is, it’s still powerful enough to be a foundation for a more accessible and efficient way to interface with computers.
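For anyone curious what “runs in 8 GB of VRAM” looks like in practice, here’s a rough sketch using llama-cpp-python with a 4-bit quantised 7B model. The file name is hypothetical; a Q4 7B GGUF is roughly 4 GB on disk, so it fits comfortably on an 8 GB card.

```python
from llama_cpp import Llama

# Load a quantised 7B model onto the GPU; the file name is a placeholder,
# any instruction-tuned GGUF of similar size should behave much the same.
llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",
    n_gpu_layers=-1,   # offload every layer to the GPU
    n_ctx=4096,        # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Explain what a symlink is in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```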