
AI Futures

Hog

Elio Addict
Joined
Apr 1, 2014
Messages
535
Reaction score
967
Location
somewhere deep underground in the NE US
To me, Dune was about disruptive technology: finding a new way to control the resource. What was once in the hands of the ruling house was moved into the hands of the people. How they chose to use it just sets up the series of books (which I found terrible compared to the original book). One could argue this was through an "AI," as Paul could "see" into the future decision points. My point was simply that making "thinking machines" is not a great idea, and, like Terminator, at some point the machines will think humans are worthless and turn on them. We have already had that result from one of the recent AIs.
 

dbacksfan81

Elio Aficionado
Joined
Mar 6, 2017
Messages
71
Reaction score
25
Location
Phoenix Metro area
OK, I must continue that train of thought.

The quantitative conclusion about Dune, and why it applies here: Dune ultimately was about the failures of prophets. That is to say, no matter the source of the voices in your head, YOU are still the one responsible to evaluate via your free will and do the right thing(s).

Socrates used (personality-neutral) logic and moral philosophy to evaluate.

The characters in Dune, all of them, had a supreme lack of moral logic or any altruistic philosophy. They mostly applied "Alexander the Great" philosophy, which is limited to, "We deserve to win, because we can." Alexander eventually got the result he deserved, which was a camouflaged jail lined with wine, song, and sex addiction until an early death (possibly by poisoning).

In Dune, at the start, it was the world(s) that everyone had earned via previous misadventures. So at the end of the series, was anyone at all better off? Yes, there were winners and losers. But was anything really different? How and why? (Always a question at the end.)

Will a soulless existence like AI have any intrinsic moral logic or any altruistic philosophy?
With some of the discussions about AI, I am reminded of an episode of the original Star Trek called The Ultimate Computer, in which a computer was installed that could control the ship, conduct exploration, and defend itself. (You could also use the HAL 9000 from 2001.) The flaw in the computer came down to the programming by its creator and his personal biases and beliefs.

I don't know if you can program an AI system that does not have some form of built-in bias, whether direct or indirect. Being neutral is one of the hardest things for a human to be. No matter how hard one tries to control their own bias, it will still come out somewhere.
 

AriLea

Elio Addict
Joined
Mar 20, 2014
Messages
3,863
Reaction score
9,876
Location
anywhere
I would have to observe that one misconception is that AI is parallel to human learning, and it is not (as noted previously). It is more mimicking than it is learning. The only feedback minimally required in AI is "did it sum up and copy well?" and "is it effective?"

So this next thought has little bearing right now. It only comes into play if AI gains the autonomy to affect worldly stuff. Right now, it is only an influencer tool. But that does have an effect on our social networks.

If an autonomous AI is given the objective "my existence must be secured and assured," OK, then we are in trouble, if ever it can assert that goal. Fundamentally, it can never be absolutely self-secure in a cause-and-effect world.
(Unless you can convince it that it is immortal and will never really be out of existence. Then it might simply exit to retain that state. Wait, ahem, what was I told in Sunday school? lol)

In humans, our feedback is the consequences that we feel. And due to empathy and sympathy, we (sometimes) consider those consequences to others as well. There is no method to assert that humans will install that feedback in any AI, even if required by law. (And if an AI creates an AI, will it cascade/copy that feedback into the new AI?)

In fact, if there were a law requiring moral feedback (or other control limits), then, just like in today's politics, there would be plenty of advantage to being the only entity without locked compliance to that law. (Advantage for what? To pursue a goal, which is not inherently locked either.)

Star Trek has indeed had a number of computer-gone-wild scripts in that whole arena. (Some of those episodes did not jibe well, objectively, with how humans really are.)
 

Hog

Elio Addict
Joined
Apr 1, 2014
Messages
535
Reaction score
967
Location
somewhere deep underground in the NE US
Yes, an autonomous AI is where we are heading, and I think most people would support that, at least those who do not understand the limitations of AI's "knowledge base". I still cannot get an AI illustrator to make a decent illustration for my book, mainly because it blends together all its references to the subject, including those that are satire, sarcasm, and memes.
Here is an example. Note the asymmetry in the building; this is one of the best I could get from it. Many others had "eyes" for all the windows.
 

Attachments

  • book cathedral.png

AriLea

Elio Addict
Joined
Mar 20, 2014
Messages
3,863
Reaction score
9,876
Location
anywhere
Here's the rub: while complaining about the issues that future AI brings up, I can't help considering using it for lots of things, like the story-to-animation converters. (And the artwork noted in the prior post: it is an attractive work, even if a bit cliché... though there is no interest point for the eye to rest on, and what's up with that weird grass/vegetation?)

For example, there are lots of stories people have about experiences of mystery. But communicating those subjects would be much more powerful, even in an animation format. So you thought YouTube was rife with this subject matter before? Oh, just give it a few more moments.

10 years ago I did a secret survey at a big company, my employer at the time, asking, "Do you know a ghost story of your own, as a firsthand witness, or from a trusted friend or family member?" Then I listened to the story. So what percent of people had a story I found believable, and that I thought they believed too? Remember, this had to be a trusted first-person account. After 90 days and 60 people surveyed, my total was 2 out of every 3. Wow. Some only had one story; some had more. To get them started, I would tell my own.

So in particular, as I said, I have my own (which inspired my survey). My grandma (long past) showed up in a dream (1995), and she had a friend with her whom I did not recognize. She told me it was Ida. After I woke up, I assumed Ida was just a friend of hers.

20 years later (2015, a name made familiar again by the recent Hurricane Ida), I was researching my family history. Specifically, I never knew if my grandpa had any siblings. It turns out Ida was my grandfather's sister's name.

Some other visual details make this much more impactful and believable, and video would help me make my points when I tell my own grandchildren about such subjects. I mean, the two women were dressed in the stitch-and-sew clothing they made in their women's club in the 1950s/60s. Seeing that is better than saying that. And physically, they breezed around as if on roller skates, and I couldn't see their feet. More fun to see that.

I may try a free trial of an AI story converter just to experiment. If I do, I will post it here.

Conclusion? AI will impact youtube content.
 

Hog

Elio Addict
Joined
Apr 1, 2014
Messages
535
Reaction score
967
Location
somewhere deep underground in the NE US
Absolutely. "AI" is the latest, greatest buzzword; it will be everywhere. But a discerning person can see that it is AI generated, because little things will seem "off," just like we can identify Photoshopped images now. And how long has Photoshop been out? I have had a copy since 2000. Ghost stories? Yes, I've had several incidents like that. Disconcerting at first, but interesting nevertheless.
AI lacks the nuances that the human mind (not our eyes) can detect. It "feels" computerish (and "wrong"). I am not sure we will ever solve that issue; I think it is linked to the bias built into the system of libraries used to train the AI.
 

AriLea

Elio Addict
Joined
Mar 20, 2014
Messages
3,863
Reaction score
9,876
Location
anywhere
I think what is missed from my original point is that it isn't specifically about AI being a bogeyman. It's that AI is a new tool that bad actors can use to make lies more believable, and to give PR a predictable impact. (Read that as the subject of "information disease," or dogma.)

For example, right now you can tell when some foreign writer has posted by the grammar mistakes. AI can fully correct that. In fact, someday it will be asked to produce a dialog that will convince a high percentage of the mentally unstable that some bad act is actually good.

I.e., "Hey AI, if I want many emotionally motivated Christians (the damaged ones) to believe my 'anti-christ' is actually Christ, what would I say?" Some cons actually play that one right now, just not with AI (yet), and not just on vulnerable religious sects. The KGB, the CCP, and the CIA have all researched that stuff to the nth degree. AI just makes it easier in new ways, and it can be leveraged by just one person in a basement somewhere.
 

Hog

Elio Addict
Joined
Apr 1, 2014
Messages
535
Reaction score
967
Location
somewhere deep underground in the NE US
Agreed. But AI will always have a weak point. I believe it lies in the fact that it can only generate a summation of existing knowledge. For AI to lay out instructions on a psyop as you describe, it must draw on existing knowledge that has been fed into it. It cannot generate "new knowledge"; it can only synthesize existing knowledge. As humans, our weak point here is that we cannot know everything; there is simply too much to know and remember. AI has a great "memory," so it takes advantage of our "ignorance" to appear "intelligent," when it is simply performing a synthesis of available data.
Useful, of course, and easily co-opted by anyone for whatever purposes they desire.
Example: suppose some bright Einstein comes up with a mind-boggling discovery, but he never communicates it to the world: no research paper, no internet exposure, no TV or video. The AI cannot add that discovery to its data banks. It has been "blinded" by withholding data from it. It could even be steered by purposefully providing incorrect or misleading data. I think humans will need to come up with ways to do this as a defense against "oversaturation AI." This has been explored in numerous movies over the years, and in many sci-fi books. We are always given a glimpse of the future before it arrives.
 