Why Does Artificial Intelligence Take So Much Time and Energy for Machines to Learn?


One risk of developing artificial intelligence technology is its enormous carbon footprint. By some estimates, training a single AI model generates as much carbon emissions as several cars produce over their entire lifetimes, including their manufacture.

Inefficient Training

Traditional data-centre workloads include video streaming, email, and social media. AI is far more computationally intensive because it needs to read huge amounts of data, over and over, until it learns to understand them.

Modern AI uses artificial neural networks: mathematical computations that loosely mimic the neurons in a human brain. The strength of the connection between each neuron and its neighbour is a parameter of the network called a weight. To learn language, the network starts with random weights and adjusts them until its output agrees with the correct answer.
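The idea of starting from random weights and nudging them until the output agrees can be sketched in a few lines. This is a minimal illustration with a single made-up weight, not any model's actual training code; real networks adjust millions or billions of weights the same way.

```python
import random

def train_single_weight(x, target, lr=0.1, steps=200):
    """Adjust one random weight until the output agrees with the target."""
    random.seed(0)
    w = random.uniform(-1, 1)          # start with a random weight
    for _ in range(steps):
        output = w * x                 # the "neuron's" prediction
        error = output - target        # how far off it is
        w -= lr * error * x            # nudge the weight to reduce the error
    return w

w = train_single_weight(x=2.0, target=6.0)
print(round(w * 2.0, 3))  # the output now agrees closely with the target 6.0
```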

How do artificial neural networks work?

The standard way to train a language network is to feed it lots of text from websites and news outlets with some of the words masked out.

Initially, the model guesses the masked words wrong, but after many rounds of adjustment, the connection weights begin to change and pick up patterns in the data. The network eventually becomes accurate.
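The masked-word setup described above can be sketched as follows. The 15% masking rate is the figure commonly cited for BERT-style training; everything else here, including the toy sentence, is purely illustrative.

```python
import random

def mask_words(text, mask_rate=0.15, seed=42):
    """Hide a fraction of words so a model can be trained to guess them."""
    rng = random.Random(seed)
    words = text.split()
    masked, targets = [], {}
    for i, word in enumerate(words):
        if rng.random() < mask_rate:
            targets[i] = word          # remember the hidden word (the answer)
            masked.append("[MASK]")
        else:
            masked.append(word)
    return " ".join(masked), targets

sentence = "the quick brown fox jumps over the lazy dog"
masked, targets = mask_words(sentence)
print(masked)   # some words replaced by [MASK]; targets holds the answers
```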

One recent model, Bidirectional Encoder Representations from Transformers (BERT), used 3.3 billion words from English books and Wikipedia articles.

During training, BERT read that data set not once but forty times. By comparison, an average child learning to talk hears about 45 million words by the age of five, roughly three thousand times fewer than BERT.
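The comparison works out as a quick back-of-the-envelope calculation. The corpus size and the child's word exposure come from the text; forty passes is the figure that makes the "three thousand times" comparison consistent.

```python
# Back-of-the-envelope check of the BERT-versus-child comparison.
bert_corpus_words = 3.3e9          # 3.3 billion words in the training data
training_passes = 40               # number of times BERT read the data set
child_words_by_age_five = 45e6     # words a child hears by age five

words_seen_by_bert = bert_corpus_words * training_passes
ratio = words_seen_by_bert / child_words_by_age_five
print(round(ratio))  # roughly 3,000 times as many words
```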

Finding the right design

What makes language models even more costly to build is that this training process happens many times over the course of development.

Researchers want to find the best structure for the network, such as:

  • How many neurons?
  • How many connections between neurons?
  • How fast should the parameters change during learning?

The more combinations they try, the better the chance that the network achieves high accuracy.
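This kind of search over design choices can be sketched as a grid search. The candidate values and the score function below are stand-ins; in real work, each evaluation means training a full model, which is exactly what makes the search so expensive.

```python
import itertools

# Candidate design choices (illustrative values only).
neurons = [128, 256, 512]
layers = [2, 4, 8]
learning_rates = [0.1, 0.01, 0.001]

def score(n, l, lr):
    # Placeholder for "train the model and measure its accuracy".
    # Peaks at n=256, l=4, lr=0.01 for this toy example.
    return 1.0 / (abs(n - 256) + abs(l - 4) + abs(lr - 0.01) + 1)

# Try every combination and keep the one that scores best.
best = max(itertools.product(neurons, layers, learning_rates),
           key=lambda combo: score(*combo))
print(best)                                               # best combination
print(len(neurons) * len(layers) * len(learning_rates))   # 27 training runs
```

Even this tiny grid requires 27 separate training runs; real searches can involve thousands.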

In contrast, human brains do not need to search for an optimal structure; they come with a pre-built design honed by evolution.

As companies and academics compete in the AI space, the pressure is on to improve on the state of the art.

Even a 1% improvement in accuracy on difficult tasks like machine translation is considered significant, leading to good publicity and better products.

But to get that 1% improvement, a researcher might train the model thousands of times, each with a different structure, until the best one is found.

Researchers have estimated the energy cost of developing AI language models by measuring the power consumption of the standard hardware used during training. They found that training BERT once has the carbon footprint of a passenger taking a round-trip flight.

However, searching for the best structure means training the algorithm many times over on the data, each time with a different number of neurons, connections, and other parameters. Factoring in that search, the cost climbs to the equivalent of many such flights.
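A rough sketch of how such energy estimates are made: multiply the hardware's power draw by the training time to get energy, then convert to CO2 using a grid emission factor. All numbers below are illustrative assumptions, not measurements from the studies the text refers to.

```python
# Illustrative assumptions, not measured values.
gpu_power_watts = 300          # assumed draw of one accelerator
num_gpus = 8                   # assumed size of the training rig
training_hours = 96            # assumed length of one training run
grid_kg_co2_per_kwh = 0.4      # assumed grid emission factor

# Energy for one training run, and its carbon footprint.
energy_kwh = gpu_power_watts * num_gpus * training_hours / 1000
co2_kg = energy_kwh * grid_kg_co2_per_kwh
print(energy_kwh, co2_kg)

# A structure search repeats the run once per candidate design,
# so the footprint scales with the size of the search.
search_runs = 100              # assumed number of candidate structures
print(co2_kg * search_runs)
```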

Why AI Models Keep Getting Bigger

AI models are much bigger than they need to be, and they grow larger year after year.

Researchers have found that larger networks lead to better accuracy, even though only a tiny fraction of the network ends up being used.
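One way researchers demonstrate that only a fraction of a network is used is pruning: after training, many weights are close to zero and can be removed with little effect on the output. The weights below are made up for illustration.

```python
# Toy list of trained weights; many are near zero (illustrative values).
weights = [0.91, -0.02, 0.005, 1.3, -0.88, 0.01, -0.003, 0.47]

def prune(weights, threshold=0.05):
    """Zero out weights whose magnitude is below the threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

pruned = prune(weights)
kept = sum(1 for w in pruned if w != 0.0)
print(pruned)
print(f"{kept}/{len(weights)} weights kept")  # 4/8 weights kept
```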

Something similar happens in children's brains, where neuronal connections are first added and then pruned away. The biological brain, however, is much more energy-efficient than computers.

AI models are trained on specialized hardware, such as graphics processing units, which draw more power than traditional CPUs. If you own a gaming laptop, it probably has one of these graphics processing units to create advanced game graphics. All this means that developing advanced AI models adds up to a large carbon footprint.

Unless we switch to 100% renewable energy sources, AI progress may stand at odds with efforts to cut greenhouse-gas emissions and slow climate change.

The financial cost of development is also becoming so high that only a few select labs can afford it, and they will be the ones setting the agenda for which kinds of AI models get developed.

Doing more with fewer resources in AI

What is the future of AI research? 

The cost of training will come down as more efficient training methods are invented. 

Similarly, while data-centre energy use was once predicted to explode, this has not happened, thanks to improvements in data-centre efficiency, hardware efficiency, and regular system refreshes.

There is also a trade-off between the cost of training a model and the cost of using it: spending more energy at training time to produce a smaller model can make the model cheaper to use.

Because a model is used many times over its lifetime, that can add up to considerable energy savings.

AI models can also be made smaller by sharing weights, that is, by using the same weights in multiple parts of a network. These are called shapeshifter networks: a small set of weights is reconfigured into a larger network of any shape.
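The weight-sharing idea can be sketched as follows. The toy "layer" below is an illustrative stand-in, not the actual shapeshifter method: one small set of weights is reused at several depths, so the number of stored parameters stays fixed no matter how deep the network is.

```python
shared_weights = [0.5, -0.2, 0.1]   # one small set of weights

def layer(inputs, weights):
    # A toy "layer": each output is a weighted sum using the shared weights.
    return [sum(x * w for w in weights) for x in inputs]

x = [1.0, 2.0, 3.0]
# Reuse the same weights at three different depths of the network.
for _ in range(3):
    x = layer(x, shared_weights)

# Total distinct parameters stored: 3, regardless of network depth.
print(len(shared_weights), x)
```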

Other researchers have shown that weight sharing gives better performance for the same amount of training time.

The AI community should invest more in developing energy-efficient training schemes. Otherwise, it risks an AI field dominated by a select few who can afford to set the agenda, including:

  • Which types of models are developed?
  • What types of data are used to train them?
  • What are the models used for?
