With how much artificial intelligence has been improving in areas such as text reading and generation, image recognition, image generation, convincing voice synthesis, and more, I think there's a lot to discuss about the effects this technology will have on society.
I'll start off with one example.
I'd been thinking about the enshittification cycle of tech, and I think it's coming for Google hard. The search engine just isn't so great at finding what you actually want, and I think that's gonna leave a big opening for Bing with their use of AI. If the AI can sift through the crap and actually find what you want for real, due to its understanding of language, it'll actually make searching super useful again.
In the pre-Google internet, search engines used to search only for exact words and phrases, which had its uses, but also meant finding a lot of sites that simply crammed in a lot of popular words and phrases to get visitors. Google cut through the crap with a better understanding of how to "rank" sites relative to how relevant they are, and even find sites that are on the topic you were looking for without using the same exact words.
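A toy sketch of the contrast described above. The documents, query, and hand-written synonym list are all invented for illustration; real engines use learned embeddings and link analysis, not lookup tables like this.

```python
# Toy sketch: exact-word matching misses a relevant page that uses
# different words, while even a crude synonym expansion finds it.
# The synonym table stands in for what real systems learn automatically.

SYNONYMS = {
    "cheap": {"cheap", "inexpensive", "budget", "affordable"},
    "laptop": {"laptop", "notebook"},
}

docs = {
    "relevant": "our guide to affordable notebook computers for students",
    "irrelevant": "cheap replacement laptop chargers and cables",
}

def exact_match(query, text):
    # Pre-Google style: every query word must appear literally.
    words = set(text.split())
    return all(q in words for q in query.split())

def expanded_match(query, text):
    # Accept any word from the query word's synonym set.
    words = set(text.split())
    return all(words & SYNONYMS.get(q, {q}) for q in query.split())

query = "cheap laptop"
print([d for d in docs if exact_match(query, docs[d])])     # misses "relevant"
print([d for d in docs if expanded_match(query, docs[d])])  # finds both
```

Exact matching only returns the keyword-stuffed page; the expanded match also surfaces the page that's on-topic but worded differently, which is the gap Google's ranking first exploited.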
But Google started to become more advertiser-friendly, and later, more shareholder-friendly. There's a limit to how far one can build a product entirely around shareholder growth, so as it turns to crap, it leaves an opening for a competitor to show up.
Since Bing/ChatGPT (which Bing is plugged into now) understands the use of language, it can actually understand context and determine relevance based on that. And that'll make it huge, I think. Context-based understanding of web pages can potentially do an excellent job of finding what people actually want, in a way that goes way beyond Google's page ranking systems, or the examination of exact words.
Edited by BonsaiForest on Dec 10th 2023 at 6:15:29 AM
And, being a commonality, it can be controlled for.
Otherwise coders wouldn't be using it, which they seem to be doing with some regularity, so I assume the output size limits of current low-grade consumer models are less of an issue than you'd think.
Edited by Florien on Dec 15th 2023 at 8:17:26 AM
Maybe it would help here to have an example of the process, the unedited and the edited versions? There are a lot of what-ifs here because it's hard to imagine what the process is like.
Regardless of whether the tool is good for this or not, you should always get feedback on your draft before you write the real version.
Someone made an artificial intelligence designed to waste phone scammers' time, and it's amazing.
Ha, ha! Genius!
I've become a fan of the AI generator Yodayo, mostly for being able to generate interactive fiction in its Tavern section, based on original characters made by Yodayo members or on pre-existing characters.
So Google has finally gotten around to pairing its multi-modal AI with a robotics chassis, and the result is, well... one of the first robots capable of learning a task just from being taught it by a human, and then replicating it. The whole thing is made with cheap off-the-shelf components to make the inevitable market as broad as possible.
When people said the other AIs were stepping stones, this is why.
Edited by Imca on Jan 7th 2024 at 2:30:20 AM
Hm, that wasn't what I was thinking of when you said 'the robot was taught it'. I would've thought that meant something more observational or goal-focused; from the abstract it sounds a lot more like 'can we have it mostly copy the inputs from being remote-controlled?' Which is still impressive, but it's a lot less generalising than I thought (also, the video is of the remote control, not of it doing things autonomously).
Learning from observation is something that Google's AI systems have done, so I wouldn't be surprised to see that in a couple of generations; I know Gato could, and I think Gemini can. But I can't really deride the base idea of learning through teleoperation; if anything, it does actually seem like a better way to teach something.
Honestly, it's not even so much the specific robot here that interests me as what it represents: that this path of development is finally being considered for pursuit rather than just an unreachable ambition.
Edited by Imca on Jan 7th 2024 at 4:37:48 AM
So none of that was scripted?
The video was remote controlled, going by the description. i.e. the learning phase.
If the abstract is correct, the machine then pulled off the same tasks on its own. Makes me wonder how many shrimp were wasted to facilitate this.
Hmm, so we didn't just see AI independently completing all those tasks.
Yep. Once again, a wicked cool AI demonstration turns out to have not had any AI present in it. Just a human doing a thing and then someone going, "See that? That's what AI is going to do some day! Probably! ...give me money now!"
It's the tech industry equivalent of video game trailers consisting of nothing but pre-rendered cutscenes.
Edited by TobiasDrake on Jan 8th 2024 at 9:10:51 AM
TBH, I'm a bit confused why they didn't use AI footage, because they must have some if they're not just outright lying in the abstract. I guess it must look less smooth or something?
The autonomous footage is on the GitHub.
Seems it's actually about the same, just substantially slower; I am guessing they didn't want to speed up the video for YouTube.
Additionally, the code is there because it's an open-source project. I don't exactly have that kind of robotics hardware lying about, but that feels like the kind of thing you wouldn't put up if it were faked.
Edited by Imca on Jan 9th 2024 at 2:37:25 AM
...can I be honest, I think I'm missing where the revolutionary element comes from. I thought we could already do this? We've been able to feed inputs through an algorithm and have the algorithm be able to replicate it for ages. Like, I pretty distinctly remember this being a feature in some versions of Lego Mindstorms where you could program a little robot to do a thing by controlling it first.
Telling a computer to copy your input and save the pattern is...not even remotely new.
Edited by Zendervai on Jan 8th 2024 at 12:37:54 PM
Those are perfect mirrors, not learning the task, which is the difference.
It's not a specific chair being taught to be pushed in; it's the task of pushing in chairs as a concept.
No, I'm still not seeing it. If it learned it by watching a robot doing it, I could buy it, but it's still "learning" by getting the exact inputs given to it and then copying it, not by trying to figure out how to generate the output needed.
It's cool, I guess? But yeah, at this point, it just looks like a worse version of a thing we could already do.
I think the point is to create general teleoperation software for complex manipulation tasks, then show that by using the remote control as training data you can create a general ability to solve the problem in question, more than just using standard training sets would.
On a technical level, this means that you don't need to explicitly program the machine to do something complicated, or have it copy your inputs exactly; both of those have high failure rates if something is slightly different or off. It means just doing the task remotely a few times and then learning the general solution from that. Which is still neat; it's a bunch of generalisation.
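A minimal sketch of that idea: learn a policy from teleoperation demonstrations (state → action pairs), then apply it to states that never appeared in the demos. The robot discussed above uses neural networks; a 1-nearest-neighbour policy and the toy chair-pushing states are used here purely to keep the example self-contained.

```python
# Behavioural-cloning-style sketch: the "policy" picks the demonstrated
# action whose demo state is closest to the current state, so it still
# produces an answer for states it has never seen exactly.

def make_policy(demonstrations):
    """demonstrations: list of (state, action); state is a tuple of floats."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def policy(state):
        # Choose the action recorded for the nearest demonstrated state.
        _, action = min(demonstrations, key=lambda sa: distance(sa[0], state))
        return action

    return policy

# Invented teleoperated demos: (distance_to_chair, chair_angle) -> action
demos = [
    ((2.0, 0.0), "approach"),
    ((0.5, 0.0), "grasp"),
    ((0.2, 0.1), "push"),
]

policy = make_policy(demos)
# A state that never appeared in the demos still gets a sensible action:
print(policy((1.8, 0.3)))  # nearest demo state is (2.0, 0.0) -> "approach"
```

The contrast with pure input-copying is that the replay would only ever reproduce the three recorded trajectories, while even this crude policy responds to in-between states; real systems generalise far more smoothly than a nearest-neighbour lookup.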
Pretty much that, to use the shrimp-cooking example.
If you take existing copying systems and get them to cook shrimp in one kitchen, they copy that exactly.
If you rearrange the furniture and put the stove somewhere else, it stops working.
You actually need to learn the specific task to have it be transferable, which is where this is different.
Edited by Imca on Jan 9th 2024 at 2:50:47 AM
Such an AI would also be much easier to train: you can do the training multiple times without having to replicate the exact conditions from the previous training sessions.
AI discovers substance that could reduce lithium use.
Potentially promising, and it's nice to hear good news about AI amidst all the doom and gloom.
I'm honestly on the fence about using ChatGPT, mostly because I oppose AI in general. Sure, I like how much it can generate, but the text is rather awkward to read, and it doesn't always get what I want.
I did try it a few days ago, but then I started making really dumb prompts because part of me wanted to even though the rest of me didn't, and I deleted my account so I wouldn't be tempted by it any more. And yet I still want to use another one, just more sparingly this time.
Edited by generation81 on Jan 18th 2024 at 8:52:20 AM
"I oppose AI in general" is somewhere between hideously vague and opposing the existence of hinges.
Could you explain what specific things you dislike about AI, what makes you oppose it? I'm interested in your reasons.
I see AI as like any other tool that dramatically changes people's lives - capable of both good and evil, and likely to be used for both.
Is it? "Language models don't have the memory to have sufficient context for maintaining consistency on their own or seeing the big picture" is a pretty big commonality.
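The memory limit mentioned here is usually worked around rather than solved; one common pattern is to send the model a pinned summary plus only the most recent turns that fit the context budget. This is a generic sketch, not any particular product's implementation, and it approximates token counts by word counts (real systems use a proper tokenizer).

```python
# Sketch of a sliding-window context: keep a pinned summary, then add
# conversation turns newest-first until the (word-count) budget runs out.

def fit_context(summary, turns, budget):
    """Return the messages to send: the summary first, then the newest
    turns that fit within `budget` "tokens" (approximated as words)."""
    cost = len(summary.split())
    kept = []
    for turn in reversed(turns):          # walk from newest to oldest
        c = len(turn.split())
        if cost + c > budget:
            break                         # oldest turns fall out of memory
        kept.append(turn)
        cost += c
    return [summary] + list(reversed(kept))

summary = "Story so far: a detective investigates a missing painting."
turns = [
    "User: describe the museum",
    "AI: the museum is a grand marble building",
    "User: who is the curator",
    "AI: the curator is an elderly woman named Vasquez",
]
print(fit_context(summary, turns, budget=25))
```

The oldest turns silently drop out of the window, which is exactly why long-form consistency breaks down: the model literally never sees them again unless they make it into the summary.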