What’s the Difference Between AI and Regular Computing?

[Image: an AI robot with a tablet device on its chest. Photo by Owen Beard on Unsplash]

Introduction

November 30th 2022 is a date permanently etched into the history books. For those unaware of its significance, this was when OpenAI released its free-to-use large language model chatbot, ChatGPT, to the public. By January 2023, it had over 100 million users, making it the fastest-growing consumer software application in history.

The release of ChatGPT, combined with the speed at which it was adopted across the globe, was a transformative milestone in the long history of AI – the moment AI went mainstream. No longer the preserve of researchers and the technology industry, AI became a tool that permeated both global culture and public consciousness almost overnight. People were wowed by its ease of use, the quality of its results, and its future potential.

For the last year, AI has seemingly been in the news every day. New generative AI platforms, such as Midjourney, have launched. Opinion pieces have heralded potential future workplace efficiencies. Meanwhile, the real-world impacts, as well as the ethical considerations, have been fiercely debated. A year on from ChatGPT’s release, it looks unlikely that AI usage within society will recede.

As we mark ChatGPT’s first anniversary, it’s worth reflecting on how artificial intelligence differs from what we consider to be regular computing. How do these technologies differ in their characteristics? What are their respective futures? And what ethical considerations do they raise?

Defining the Basics 

Before we consider the differences between artificial intelligence and regular computing, it’s worth taking a moment to define what these terms mean.  

Regular computing, often referred to as traditional or classical computing, uses algorithms to perform specific tasks. Executing and completing a task relies on a set of predefined instructions being programmed in advance, which allow the computer to process data and produce the desired outcome. To use a cooking analogy, you can ask the computer to bake a cake, but it will only be able to complete the task if you have given it the recipe to follow. Regular computing relies on predetermined instructions, and it will only produce a result if you have told it how to achieve that result.
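To make the analogy concrete, here is a minimal, hypothetical Python sketch of the ‘regular computing’ approach. The recipe steps and names are invented for illustration; the point is simply that the program can only bake a cake because every instruction has been spelled out in advance.

```python
# A hypothetical illustration of regular computing: the program follows a
# fixed, pre-programmed recipe and can do nothing it has not been told to do.

RECIPE = [
    "Preheat the oven to 180C",
    "Cream the butter and sugar",
    "Beat in the eggs",
    "Fold in the flour",
    "Bake for 25 minutes",
]

def bake_cake():
    """Execute the predefined instructions, in order, every time."""
    for step in RECIPE:
        print(step)

bake_cake()  # Always the same steps; a different dish needs a new program.
```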

Artificial intelligence (AI) systems are designed to mimic human intelligence: machines learn, adapt, and make decisions based on data without explicit programming. Machine learning, a subset of AI that has gained particular prominence, emphasises the ability of systems to improve their performance through experience, becoming better at a task through repetition and exposure to it. Using our cake analogy once more, rather than pre-programming a specific cake recipe, you would feed your AI programme many different cake recipes (the raw data). In response to the task ‘bake a cake’, the AI would generate what it judged to be the most appropriate steps to follow, based on the recipes in its input data.
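Continuing the analogy, the sketch below is a deliberately simplified, hypothetical stand-in for machine learning (a real system would be far more sophisticated). Instead of being handed one fixed recipe, the program is given several example recipes as data and derives its own set of steps from the patterns it finds in them.

```python
from collections import Counter

# Hypothetical training data: several example cake recipes (the raw data).
example_recipes = [
    ["preheat oven", "cream butter and sugar", "beat in eggs", "fold in flour", "bake"],
    ["preheat oven", "melt chocolate", "beat in eggs", "fold in flour", "bake"],
    ["preheat oven", "cream butter and sugar", "add vanilla", "fold in flour", "bake"],
]

def learn_recipe(recipes, threshold=0.5):
    """A toy stand-in for 'learning': keep any step that appears in at least
    `threshold` of the example recipes, preserving the order it first appears in."""
    counts = Counter(step for recipe in recipes for step in recipe)
    order = []
    for recipe in recipes:
        for step in recipe:
            if step not in order:
                order.append(step)
    return [step for step in order if counts[step] / len(recipes) >= threshold]

print(learn_recipe(example_recipes))
# ['preheat oven', 'cream butter and sugar', 'beat in eggs', 'fold in flour', 'bake']
```

The ‘recipe’ this produces was never written down by a programmer; change the example recipes and the output changes with them, which is the essence of learning from data rather than following fixed instructions.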

A child holding the hand of a humanoid robot
Photo by Andy Kelly on Unsplash

The Key Distinctions Between Regular Computing and AI 

Learning and Adaptation: 

  • Regular Computing: Traditional computing operates on fixed instructions. The programme executes a predefined set of commands without the ability to adapt or learn from new information. 

  • AI: AI, and machine learning in particular, excels at learning from data. Algorithms iteratively improve their performance, making predictions or decisions based on patterns identified in massive datasets. 

Flexibility and Problem Solving: 

  • Regular Computing: Traditional systems are proficient at solving specific problems for which they are programmed. Their utility extends to a wide array of applications but remains confined to predefined tasks. 

  • AI: AI thrives in dynamic environments, adapting to unforeseen challenges. The ability to generalise knowledge allows AI systems to tackle diverse problem sets, often outperforming traditional computing in complex, ambiguous scenarios. 

Decision-Making: 

  • Regular Computing: Decisions in traditional computing are deterministic, following predefined rules without the inherent capacity for nuance or context awareness. 

  • AI: Decision-making in AI involves probabilistic reasoning. Machine learning models evaluate probabilities based on patterns in data, providing a nuanced approach to decision-making that can be more akin to human cognition; the short sketch below illustrates the contrast. 
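
As a rough, hypothetical illustration of this contrast (the email-filter scenario, word weights, and function names below are invented for the example), a traditional program applies a hard-coded rule, while a simple learned model weighs the evidence and reports how confident it is.

```python
import math

# Hypothetical scenario: deciding whether an incoming email is spam.

def rule_based_filter(email_text):
    """Regular computing: a fixed, deterministic rule written by a programmer."""
    return "WIN A PRIZE" in email_text.upper()

def probabilistic_filter(email_text, weights, bias=-1.0):
    """A toy stand-in for a learned model: scores words it has seen in its
    training data and converts the score into a probability (logistic function)."""
    score = bias + sum(weights.get(word, 0.0) for word in email_text.lower().split())
    return 1.0 / (1.0 + math.exp(-score))  # estimated probability of spam

# Word weights a model might have learned from example emails (made up here).
learned_weights = {"win": 1.2, "prize": 1.5, "meeting": -0.8, "agenda": -0.6}

email = "Congratulations, you have won a prize"
print(rule_based_filter(email))                                # False: the exact-phrase rule misses it
print(round(probabilistic_filter(email, learned_weights), 2))  # 0.62: a graded, probabilistic judgement
```

The deterministic rule gives a flat yes/no and fails as soon as the wording changes, whereas the probabilistic model produces a degree of belief that can be thresholded, combined with other evidence, or refined with more data.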

Human-Like Capabilities: 

  • Regular Computing: Traditional systems lack the capacity for human-like reasoning, learning, or understanding. They can be powerful tools but don’t attempt to emulate cognitive functions. 

  • AI: Artificial intelligence aims to replicate and augment human cognitive abilities. Natural language processing, image recognition, and even creativity (especially generative AI applications such as Midjourney, DALL-E, and Adobe Firefly) are within the realm of AI applications. 

The Future of Regular Computing and AI

It’s impossible to predict what the future holds for regular computing and AI, as developments in both are happening at lightning speed. From the perspective of a user and consumer of both technologies, however, they feel as though they will converge. We are already seeing this, with auto-complete features incorporated into most office applications and AI assistants playing an increasing role in most software.

It seems likely that further hybrid systems, combining the precision of traditional computing with the adaptability of AI, will emerge. Developments within the field of quantum computing might help facilitate this by providing increased computational power to both AI and traditional computing.

[Image: a robot with a human face, its wires and circuits visible in its arms and body. Photo by Maximalfocus on Unsplash]

Ethical Considerations

The integration of AI into many facets of our lives raises a host of ethical considerations. Issues such as bias in AI algorithms, job displacement due to automation, and the responsible use of AI in decision-making processes demand thoughtful reflection.

AI requires very large datasets to learn from, and how and where these datasets are collected raises issues around privacy and the ownership of digital information. Furthermore, these datasets often require additional processing, which is typically outsourced to low-income countries where wages are low and workplace conditions are poor. The use of AI in decision-making also raises questions about how we should align AI values with human ones and, crucially, which human values we should align them to.

Lawsuits, such as the copyright infringement claims made by Universal Music against the artificial intelligence start-up Anthropic, are just one instance in which the ethics surrounding creativity have been called into question. In some respects, AI has democratised creative output by removing the need for natural creative talent. However, this increased ability to ‘create’ continues to raise issues over intellectual property and the ownership of original works of art, music, and prose.

Conclusion

Traditional computing remains indispensable in our daily lives, providing the backbone for routine tasks. However, the transformative capabilities of AI open new frontiers, enabling machines to not just process information but to comprehend, learn, and make decisions. 

A year on from the release of ChatGPT, AI and regular computing continue to become increasingly intertwined.
