Artificial Intelligence: Where Are We Today?
Published on: Sunday, 04-02-2024
How much AI is in the world today? PV Sivaram explores the present status of AI in industry and the fears arising from it.
In the development, deployment and acceptance of AI, where are we today?
State of activity
Of the various technological advancements of this century (or indeed of the new millennium), the maximum hype is showered upon Artificial Intelligence. So, what is happening?
Everything, Everywhere, All at Once.
If asked to describe the activities in exploration and exploitation of AI today, this would be an apt description. The range and intensity of exploration is huge.
At a high level we can identify six areas:
· Machine Learning
· Pattern Recognition
· Robotics
· Predictive analysis
· Machine vision, and
· Natural language processing.
A huge number of papers is being published on the above topics. Much more research and development is happening in corporate laboratories, not yet visible to the public eye. The work is frenetic, and yet fragmented. Much development is going on not in response to a user need, but simply because it is possible.
In essence there are two broad categories of AI work. One category aims to mimic human competence; this is called Human Like AI. This approach displaces humans from some of the work they do, much like automation in the previous Industrial Revolution. Technology relieves humans of work which they cannot do, should not do, or do not like to do. Where does this apply?
Work which humans cannot do – applies to tasks that demand computational power or memory of an order no single human can provide, and where the work cannot easily be divided among more people because the effort of collaboration is too large.
Work which humans should not do – refers to work which is repetitive and boring, and causes exhaustion. Such work is routine and monotonous, offering no challenge to the human brain, and the monotony can lead to actual distress and harm.
Work which humans do not like to do – this is distinct from the previous category. It is work which people feel offends their beliefs, faith, moral values, societal norms and so on.
Extending this thought further, there is work which is prohibited by law, and that is a topic for law-makers in every country. There is no consensus on what should be defined as illegal. This does not even touch upon the philosophical aspects of the immoral, unfair, biased and so on.
There is another approach to AI – Human Augmenting AI. This is the approach which works alongside and under the control of humans, enhancing their capability and productivity. In terms of automation, this kind of AI is like a tool, whereas Human Like AI is more like a machine. Generative AI, as popularised by ChatGPT, is a good example of this kind.
Exploration
As seen above, at a high level we can identify the six areas being explored in the domain of AI machines.
But if we drill down, we will discover some more areas. A good number of research areas are also about discovering tools and methods that help AI developers develop faster and more efficiently.
Beyond machine learning, work is underway on AI which can itself do research on AI.
State of knowledge
How much AI is in the world today? In a scenario developing so rapidly, it is not possible to answer this question accurately. However, one estimate is that over 1.5 billion devices using AI are deployed today. The applications which power these devices number well into the thousands.
This is the first gap in our knowledge: we do not have a system to codify and count these applications.
At a different level, we could look at AI from another perspective:
· Narrow AI – designed to execute specific tasks
· General AI – which could perform tasks based on past experience, and
· Super AI – which could take up and execute tasks autonomously.
As of today, only the first, Narrow Artificial Intelligence, is deployed in any practical application. Some work is going on in General AI, while Super AI is still a hypothesis.
Expertise
Many work areas have moved from hypothesis stage to application developments. Some of the prominent areas where good progress is visible are:
· Game playing automata (Chess, Go)
· Advanced Robotics
· Visual inspection (quality, surface finish)
· Face recognition
· Self-driving cars and trucks, and
· Driverless taxis.
And of course the latest rage, ChatGPT and similar tools.
While the skills and expertise developed in these applications can be of great use to manufacturing enterprises, we still have to wait a while before we see the first application use cases.
More work is going on in developing platforms and tools for development; the number of applications is relatively small. Indeed, when we look from the point of view of the manufacturing industry, not much has come to light.
One reason might be that work on AI is concentrated in computer science and IT programming labs, and there is not much collaboration with the manufacturing disciplines. We notice that many people in manufacturing are queuing up for courses in programming in an effort to bridge the gap. But for AI to be infused into the tools and practices of production engineering, the need for programming should be done away with.
Three things are needed for developing good applications in AI.
First, ideas – and this needs domain experts to be involved at the development stage.
Second is programming, and IT experts are needed for this part.
Third is data. Data is needed during development and also during actual deployment. But the data made available during development (called training data) can strongly influence the AI algorithms: if the data is biased, then the decisions reached by the AI machine will also be biased.
How does data get biased? It could come from the source the developer obtains the data from: skewed in terms of geography, of the collaborating industries, or of the purpose for which the data was originally captured. For example, if the data is sourced from hospitals, it could be biased towards people who are unwell; if it comes from an economically weaker section, the measurements could differ.
We must acknowledge that these considerations do not appear to be weighty for the manufacturing sector.
Expectations as of today
We would wish for many AI-enabled machines which relieve humans of tasks which humans do not like to execute. Each such machine would be fine-tuned to perform one task at high speed and with good quality.
We would like to see in the next few years many manufacturers coming up with machines for similar tasks, so that there is healthy competition. We should have a choice of different designs and features.
Going further, we would like possibilities for these machines to work in tandem and communicate with each other. This would enable collaboration at a machine-to-machine level.
Such machines would truly represent cyber-physical systems in the sense of Fourth Industrial Revolution.
Fears
Collaborative AI
One of the fears that we have universally is about collaborative AI, where the terms of collaboration between machines are not under the control of humans. There are of course benign examples like two such machines playing chess against each other and in very short time reaching incredible levels of proficiency. But sometimes such capability can be worrying and frightening.
Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI.
Hallucination
An AI hallucination occurs when a generative AI model produces inaccurate information as if it were correct. These hallucinations can be caused by a variety of factors, including:
· Insufficient training data
· Incorrect assumptions made by the model, and
· Biases in the data used to train the model.
AI hallucinations can result in content that is wrong or even harmful. For example, asked for five examples of bicycle models that will fit in the back of a specific make of sport utility vehicle, a generative AI application may confidently list models that do not exist or do not fit.
Because generative AI is so convenient, there is a risk that its recommendations are used for decision-making without a serious double-check.
Deepfake
Deepfakes use artificial intelligence (AI) to create realistic-looking but fabricated content, often videos or images. They involve training a model on large datasets of real faces and then using that model to generate or manipulate content. Deepfakes can be potentially used for propaganda or misleading people to make decisions which are detrimental.
Deepfake technology has been used in the entertainment industry to enhance various aspects of content creation and production. For example, deepfake technology can generate highly realistic computer-generated imagery (CGI) for movies and TV shows.
Deepfake AI also allows free manipulation of the clips and images businesses and marketers use to send their customers more relevant content and experiences.
Hackers
Like everyone else, we fear AI can help hackers to make more sophisticated attacks which are more difficult to detect and prevent. Going a step further, it is feared that AI engines may be developed which can tirelessly make attempts at hacking, without any motive behind them. As we all know, motiveless attacks are the most dangerous.
Biased data
Data bias in machine learning is a type of error that occurs when some elements of a dataset are given more weight or representation than others. This can lead to skewed outcomes, low accuracy levels, and analytical errors.
Machine learning bias is also known as algorithm bias or AI bias. It occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process.
For example, if a dataset contains mostly images of white men, then a facial-recognition model trained on this data may be less accurate for women or people with different skin tones.
Here are some other impacts of data bias on machine learning:
· Unbalanced representation: A biased dataset is unbalanced and fails to represent the population the model is intended to serve.
· Inaccurate results: A biased dataset produces inaccurate results and prejudiced decisions.
· Varying precision: A model trained on a biased dataset can show very different precision in different contexts.
· Systematic prejudice: Decisions end up consistently skewed against under-represented groups.
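The facial-recognition example above can be sketched in a few lines of Python. This is a hypothetical toy model, invented purely for illustration (a single numeric "face feature" per group and a simple threshold classifier, not a real recognition system): when one group dominates the training data, a model fitted to minimise overall error drifts towards the majority group, and its accuracy on the under-represented group collapses.

```python
import random

random.seed(0)

# Hypothetical illustration: one "face feature" value per sample, whose
# distribution differs slightly between two demographic groups, A and B.
def sample(group):
    centre = 0.0 if group == "A" else 1.0
    return random.gauss(centre, 0.6), group

# Group A dominates the training set (900 samples vs 100 for group B).
train = [sample("A") for _ in range(900)] + [sample("B") for _ in range(100)]

# A naive "model": pick the threshold that minimises overall training error.
# Because errors on group B are rare in this dataset, the fitted threshold
# drifts well past group B's side of the feature axis.
def fit_threshold(data):
    candidates = [i / 100 for i in range(-100, 201)]
    return max(candidates,
               key=lambda t: sum((x < t) == (g == "A") for x, g in data))

threshold = fit_threshold(train)

# Evaluate on a balanced test set: accuracy differs sharply by group.
test = [sample("A") for _ in range(500)] + [sample("B") for _ in range(500)]

def accuracy(group):
    pts = [(x, g) for x, g in test if g == group]
    return sum((x < threshold) == (g == "A") for x, g in pts) / len(pts)

print(f"accuracy on majority group A: {accuracy('A'):.2f}")
print(f"accuracy on minority group B: {accuracy('B'):.2f}")
```

The overall accuracy of such a model can still look respectable, which is exactly why the bias is easy to miss unless results are broken down per group.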
Artificial Super intelligence
Artificial superintelligence (ASI) is a software-based system with intellectual powers beyond those of humans across a comprehensive range of categories and fields of endeavor. ASI doesn't exist yet and is a hypothetical state of AI.
The AI we use today is narrow AI – each machine is trained to perform one specific task, like suggesting the best possible route in the context of real-time traffic conditions.
Theoretically, ASI's superior capabilities would apply across many disciplines and industries and include cognition, general intelligence, problem-solving abilities, social skills and creativity.
Ethical and moral considerations
As AI sources data from many places, there is a possibility that some ethical lines may be crossed. Equally, questions of ownership, protection of IPR, etc., also get thrown up.
(This is Part 2 of a 3-part series on Artificial Intelligence. Part 1 appeared in the January 2024 edition. Part 3 would appear in March 2024)
PV Sivaram, Evangelist for Digital Transformation and Industrial Automation, is mentor and member of steering committee at C4i4. He retired as the Non-Executive Chairman of B&R Industrial Automation and earlier the Managing Director. He is a past President of the Automation Industries Association (AIA). After his graduation in Electronics Engineering from IIT-Madras in 1976, Sivaram began his career at BARC. He shifted to Siemens Ltd and has considerable experience in Distributed Systems, SCADA, DCS, and microcontroller applications.
Sivaram believes strongly that digitalisation and adoption of the technology and practices of Industry 4.0 are essential for the MSMEs of India. He works to make these concepts clearer to the people for whom they are important. He believes SAMARTH UDYOG is closer to the needs of India, and that we must strike our own path to Digital Transformation. The foremost task ahead is to prepare people for living in a digital world. He is convinced that the new technologies need to be explored and driven into shop-floor applications by young people. We need a set of people to work as Digital Champions in every organisation.