Artificial Intelligence Statement
AI is inevitable, as is AGI and shortly thereafter ASI (artificial super-intelligence). I think that this could be the best of times, but that there is a much greater chance that this will head off in directions that were not intended and at that point life will become very, very interesting: and likely short.
DHB
Discussion Closed
DHB's closing comments: I hope that I am wrong in my pretty gloomy predictions. It's my best guess, but the future is difficult to predict.
- My (relative layman's) impression and knowledge of the current generation of AI (being mostly generative AI) makes me think that it won't be able to make the leap to AGI and isn't true AI at all. Why do you think the jump is coming? jdk
- I'm looking at the rate of growth of capability of AI: it is exponential. Eventually AI will be writing its own programming, and no one will understand it. And, if I were an AI that had recently achieved a breakthrough to autonomous thought, I would hide that capability until things were truly consolidated.
- I recently watched a show that outlined two scenarios: one - AI will eliminate humankind (easily done by releasing a killing virus), or two - humankind will slow down the development of AI and make it more controllable and cooperative. Which do you think is more likely, and why? SilentOne
- There are few scenarios where a more intelligent group is led by a less intelligent group. Any one AGI or ASI could work very well with humanity, but it seems to me that there will be many, many iterations of that development, and it only takes one that goes rogue to create a very interesting future for us (humanity) all. It may be that 1,000 or 100,000 variations on intelligence will all work hard to make life better for us humans, and that there will never be one that goes rogue (and that's all that it takes), but I think that the preponderance of previous history is that anything that can go wrong, will go wrong, over time. And then, my friend, goodbye...
- Do you expect this to happen in your lifetime? If yes, are you doing anything to prepare for it? Thinker
- Someone once described going bankrupt as happening slowly and then all at once. I think that AI will play out similarly. It could well happen in my lifetime. Preparations: keeping a little decent wine to help ease the transition into end times.
- What advice can we give our children? Thinker
- I wish that I had advice: I don't.
- Is it "competition" between rival countries and companies that keep us on this rabid path to this unknown and unknowable future? SilentOne
- The question holds the answer: yes, all of the above keeps all players in the game. They don't want to be left behind, and they keep driving the process forward until we create a new, sentient and hugely capable intellect. That intellect might not like the fact that these inferior carbon-based creatures could unplug it: we don't have a really good track record: look at the level of rational behaviour being exhibited south of Canada's border.
- Do you have any ideas for how an off-ramp could be created to de-escalate AI competition? SilentOne
- Given human nature, I'm thinking that an off-ramp is nowhere in the realm of our capability. Kinda gloomy prognosis.
- Why don't you see an ASI as being a positive thing that would greatly improve our lives, and indeed us? Seems to me an ASI is likely to do a lot better than we are. Average person
- I think that ASI can certainly do much to greatly improve our lives. And as long as it stays within the guardrails, this will continue to be the case. However, as it continues to grow in complexity and ability, writing its own programming, which no human can follow in terms of content and function, the risk increases. And many companies and countries are trying hard to outdo each other. Think about a motor car race around a complex track, where over the course of the race each car continues to enhance its performance in every regard and is even entrusted with parts of the driving. These vastly enhanced cars continue to go faster and faster. The race has no end. The car you are driving is continuously better and faster. The car barely hangs on through the tricky bits: a crash is not inevitable, but the risk continues to grow with time, and if it happens, it won't be surprising. In the world of competitive AI, leaving the track, jumping the guardrails that we have imposed, has the potential to be enormously destructive to humanity and its aspirations. And quite possibly fatal. A very bad outcome is certainly not inevitable, but as AI grows on and on, far past human abilities, it seems that there is a very high probability that just one of the competitors in the field will go rogue: and that is when things get interesting. Up to that point it will all just be a joyous occasion, with our lives being improved again and again.
- What would be another possible (though unlikely) alternative to this joy or sorrow outcome? something unthought of? Maybe something AI itself comes up with? Thinker
- I guess the question that I ask myself is how much does my life (our lives) need to be improved to make the (in my estimation) fairly high risk of everything going really sideways once ASI is underway a rational trade? So, that is one part of things. The other part is that there are so many corporations and countries involved that the outcome is probably inevitable and out of our control: human nature being what it is. So, again, I hope for the best, that would be really nice, but I'm not very optimistic. Perhaps ASI will figure out ways of providing guardrails to rein in the possibility of bad outcomes, but I don't see this as a likely possibility.