
AI Update


AI Update – November 2024: an overview of some of the current constraints on this technology


I have written a number of articles on AI in relation to women’s health, and so have decided to give some regular updates on the progress of this technology.

The material presented draws on the November – December 2024 issue of Energy and Technology magazine, published by the IET (of which I am a long-time member and Chartered Engineer), and on Daniel Kahneman’s book ‘Thinking, Fast and Slow’, which is a good read but very detailed, so you cannot just skim the material and understand it.

The main thrust in AI at the moment is generative AI – for example, ChatGPT – utilising Large Language Models (LLMs). These have caused a revolution, in that we in the public can ask lots of questions and get very quick and detailed answers. In fact, we can use this technology as a personal assistant: to draft letters or improve our own drafts, to tutor us on a range of topics, and even to help with writing software code. When using it as a tutor, we need to ask good questions in order to get meaningful responses.

Which brings me to the question: has generative AI hit a limit, or will it hit one in the near future? As I see it, we are beginning to run into a number of constraints. Firstly, energy consumption is huge, and most people are unaware of this constraint; it is why Microsoft, Google and Amazon are investing in modular nuclear reactors to power their respective data centres. In the UK, data centres account for 4% of grid power consumption, and growing. Secondly, LLMs require huge amounts of data – I mean really huge amounts – and the required amount of data might not be available. People are also becoming aware of the importance of their personal data and will start restricting access to it. Additionally, the limits of our knowledge are being reached. For example, is generative AI really intelligent in its own right, or just a very good pattern-recognition system? I do not think we have a clear answer to this question. A further concern is what happens if generative AI starts training on AI-generated data rather than real data.

We have also encountered AI hallucinating, which is a very active area of research. Hallucination appears to occur when the correct data was not available to train the AI system, yet the system provides an answer anyway.

Daniel Kahneman presented two models of the human brain. System 1 is very fast, automatic and error-prone; this is the subconscious mind at work. Ironically, this seems to match up with generative AI and its hallucinations.

The other model is System 2, which is slow, effortful and reliable; this is the conscious mind at work. System 2 is where we want generative AI to end up, without being slow.

So the inference from the above is that generative AI may be more human-like (System 1) than we thought, or wanted.

The next few years of AI research should be very revealing, possibly requiring dramatic changes in AI architecture. These may include the incorporation of symbolic reasoning engines alongside LLMs, as well as new approaches to reduce energy consumption and the amount of data required for training. It is quite confounding that, in a world facing huge issues with energy generation, the prevailing AI technology is an energy black hole.

In hot countries like Australia, do we shut down the AI in summer, or the air conditioner? The trouble is, our governments have not realised that this is a real possibility.
