Trends in Artificial Intelligence

You have probably read numerous articles on artificial intelligence here, and we are sure that by now you are convinced the artificial intelligence (AI) revolution is here to stay. Before passing judgment on AI, though, it is important to understand the current state of the technology.

Several new trends in AI are picking up pace in the modern world we live in. We discuss the latest of them here in this blog for your leisurely read.

1. Reinforcement Learning and Its Real-World Applications

Reinforcement learning is a relatively new field in AI that is mushrooming into the next big thing. Whenever an AI agent is deployed in the real world, it needs to explore its environment while still obeying the constraints that environment imposes. Several research groups have proposed methods such as Constrained Policy Optimization (CPO), which ensures safety during exploration.
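
The full CPO algorithm relies on trust-region updates, but the underlying objective can be illustrated with a much simpler Lagrangian relaxation. The toy two-armed bandit, step sizes, and cost budget below are our own assumptions for the sketch:

```python
import numpy as np

# Sketch of constrained policy search via Lagrangian relaxation:
# maximize reward while keeping expected cost under a budget.
# (CPO itself uses trust-region updates; this primal-dual scheme
# only illustrates the constrained objective.)
rng = np.random.default_rng(0)
theta = np.zeros(2)            # parameters of a 2-armed softmax policy
lam = 0.0                      # Lagrange multiplier on the cost constraint
budget = 0.2                   # maximum allowed expected cost (assumed)
reward = np.array([1.0, 2.0])  # arm 1 pays more...
cost = np.array([0.0, 1.0])    # ...but incurs a safety cost

for step in range(2000):
    p = np.exp(theta - theta.max()); p /= p.sum()
    a = rng.choice(2, p=p)
    # REINFORCE step on the Lagrangian objective r - lam * c
    grad_logp = -p; grad_logp[a] += 1.0
    theta += 0.05 * (reward[a] - lam * cost[a]) * grad_logp
    # dual ascent: raise lam whenever the constraint is violated
    lam = max(0.0, lam + 0.01 * (p @ cost - budget))

p = np.exp(theta - theta.max()); p /= p.sum()
print("policy:", p, "expected cost:", p @ cost)  # cost hovers near budget
```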

These AI agents can also be trained through feedback. MacGlashan et al. proposed an algorithm called Convergent Actor-Critic by Humans (COACH), which learns from policy-dependent feedback so that non-technical users can train robots. COACH can learn various behaviors on a physical robot.
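
A minimal sketch of the core COACH update, in which the trainer's feedback stands in for the advantage term of an actor-critic update; the feedback oracle below is an invented stand-in for a real human trainer:

```python
import numpy as np

# Core COACH idea: human feedback f (+1 "good", -1 "bad") is used
# directly as the advantage in a policy-gradient step. The task and
# the feedback function are invented for illustration.
rng = np.random.default_rng(1)
n_actions = 4
theta = np.zeros(n_actions)            # softmax policy over actions

def human_feedback(action):
    # stand-in for the trainer: approves action 2, mildly
    # disapproves everything else
    return 1.0 if action == 2 else -0.2

for _ in range(500):
    p = np.exp(theta - theta.max()); p /= p.sum()
    a = rng.choice(n_actions, p=p)
    f = human_feedback(a)
    # theta += alpha * f * grad log pi(a): feedback as advantage
    grad_logp = -p; grad_logp[a] += 1.0
    theta += 0.1 * f * grad_logp

p = np.exp(theta - theta.max()); p /= p.sum()
print("learned action preferences:", np.round(p, 3))
```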

Related: Artificial Intelligence is an Infinite Game

2. Deep Learning Optimization

Methods such as batch normalization and whitening neural networks (WNN) are used to regularize deep neural networks. However, the computational overhead of building the covariance matrix and solving its decomposition (SVD) is a bottleneck for applying whitening. A newer method, Generalized Whitening Neural Networks (GWNN), overcomes the limitations of WNN by reducing this computational overhead and producing compact representations.
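
To see where the overhead comes from, here is plain ZCA whitening of a batch of activations; the covariance construction and the eigendecomposition are the expensive O(d³) steps that GWNN works to avoid (batch and layer sizes are illustrative):

```python
import numpy as np

# Plain whitening of a batch of activations: center, estimate the
# d x d covariance, decompose it, and rescale every direction to
# unit variance. The decomposition is the cubic-cost bottleneck.
rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 256))   # batch of activations (n x d)
Xc = X - X.mean(axis=0)                # center
cov = Xc.T @ Xc / (len(X) - 1)         # d x d covariance matrix
eigval, eigvec = np.linalg.eigh(cov)   # O(d^3) eigendecomposition
W = eigvec @ np.diag(1.0 / np.sqrt(eigval + 1e-5)) @ eigvec.T  # ZCA
Xw = Xc @ W                            # whitened activations
# covariance of the result is (approximately) the identity
print(np.allclose(np.cov(Xw, rowvar=False), np.eye(256), atol=1e-1))
```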

Researchers have also proposed Winograd-style fast convolutions for higher dimensions, optimized for CPUs. The algorithm was benchmarked against popular frameworks like Caffe and TensorFlow built with the AVX and Intel MKL optimized libraries. An interesting insight that emerged from this work is that current CPU limitations are largely due to software rather than hardware.
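
The flavor of the trick is easiest to see in the smallest Winograd transform, F(2,3), which produces two outputs of a 3-tap convolution using four multiplications instead of six:

```python
import numpy as np

# Winograd F(2,3): two outputs of a 3-tap convolution in 4
# multiplications instead of 6. Higher-dimensional variants of this
# trade-off are what CPU-optimized implementations exploit.
def winograd_f23(d, g):
    """d: 4 inputs, g: 3 filter taps -> 2 convolution outputs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g), direct)   # same result, fewer multiplies
```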

Related: AI is Capable of Outdoing Us Humans

As the number of feature maps increases, so does their redundancy, leading to inefficient memory usage. A method called RedCNN has been proposed to reduce the dimensionality of feature maps while preserving their intrinsic information and reducing the correlation between them. It uses a circulant matrix for the projection, which yields high training and mapping speed.
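
The speed comes from the structure of the circulant matrix: multiplying by it is a circular convolution, which FFTs compute in O(n log n) without ever materializing the n × n matrix. A small self-contained check (dimensions and values are illustrative):

```python
import numpy as np

# A circulant projection R @ x, where R is generated by a single
# vector c, equals the circular convolution of c and x and can be
# computed with FFTs instead of a dense matrix-vector product.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)   # first column of the circulant matrix
x = rng.standard_normal(n)   # flattened feature map

# explicit circulant matrix, built only to verify the fast version
R = np.stack([np.roll(c, j) for j in range(n)], axis=1)

slow = R @ x                                          # O(n^2)
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real  # O(n log n)
print(np.allclose(slow, fast))                        # True
```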

3. Deep Learning Applications

In healthcare, sleep disorders can be diagnosed by identifying sleep patterns, enabling better care. Currently, identifying those patterns is itself cumbersome: many sensors are attached to the patient's body, which makes sleep difficult and renders the measurements unreliable. To overcome these challenges, a team from MIT researched using wireless radio-frequency (RF) signals to identify sleep patterns with no sensors on the patient's body.

A CNN-RNN combination was used to identify patterns for sleep-stage prediction. However, the RF signals carried a lot of unwanted noise, so the team added adversarial training to discard extraneous information specific to any individual while retaining the information needed to predict the sleep stage. The team achieved significantly better results (around 80% accuracy) than the existing method.
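
A rough sketch of such a CNN-RNN pipeline in PyTorch, with a 1-D CNN encoding each RF spectrogram frame and a GRU modeling the sequence over time; the layer sizes, four sleep stages, and input shape are simplified assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

# Sketch of a CNN-RNN sleep-stage classifier: a 1-D CNN extracts a
# feature vector per RF frame, a GRU models their temporal
# evolution, and a linear head predicts the sleep stage.
class SleepStageNet(nn.Module):
    def __init__(self, n_freq=64, n_stages=4):
        super().__init__()
        self.cnn = nn.Sequential(             # per-frame encoder
            nn.Conv1d(n_freq, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.rnn = nn.GRU(32, 64, batch_first=True)   # temporal model
        self.head = nn.Linear(64, n_stages)           # stage logits
        # the paper additionally trains an adversarial discriminator
        # on the encoder output to strip subject-specific information

    def forward(self, x):              # x: (batch, time, freq, samples)
        b, t, f, s = x.shape
        z = self.cnn(x.reshape(b * t, f, s)).squeeze(-1)  # (b*t, 32)
        h, _ = self.rnn(z.reshape(b, t, 32))
        return self.head(h[:, -1])     # predict stage from last state

logits = SleepStageNet()(torch.randn(2, 30, 64, 128))
print(logits.shape)                    # torch.Size([2, 4])
```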

Related: Artificial Intelligence: Unlocks The Gate of Possibilities

4. Meta-Learning

A model called Model-Agnostic Meta-Learning (MAML) learns initial parameters by randomly sampling from a distribution of tasks. The model can then be adapted to new tasks with only a small amount of training data and interaction, a setting known as few-shot learning. The researchers demonstrated MAML on classification, regression, and reinforcement learning tasks.
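
A minimal MAML sketch on toy sine-wave regression: an inner gradient step builds task-specific "fast weights", and the outer update differentiates through that step. The network size, step sizes, and task distribution are illustrative assumptions:

```python
import torch

# MAML in miniature: each task is a sine wave with random amplitude
# and phase; the meta-update optimizes the initialization so that
# one inner gradient step adapts well to any sampled task.
def sample_task():
    a, phase = 0.1 + 4.9 * torch.rand(1), 3.1416 * torch.rand(1)
    def data(n):
        x = 10 * torch.rand(n, 1) - 5
        return x, a * torch.sin(x + phase)
    return data

# tiny 2-layer MLP kept as raw tensors so fast weights can be swapped in
params = [(0.1 * torch.randn(1, 40)).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (0.1 * torch.randn(40, 1)).requires_grad_(),
          torch.zeros(1, requires_grad=True)]

def forward(x, p):
    return torch.relu(x @ p[0] + p[1]) @ p[2] + p[3]

opt = torch.optim.Adam(params, lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                          # meta-batch of tasks
        task = sample_task()
        x_s, y_s = task(10)                     # support set
        x_q, y_q = task(10)                     # query set
        loss = ((forward(x_s, params) - y_s) ** 2).mean()
        grads = torch.autograd.grad(loss, params, create_graph=True)
        fast = [p - 0.01 * g for p, g in zip(params, grads)]  # inner step
        meta_loss = meta_loss + ((forward(x_q, fast) - y_q) ** 2).mean()
    meta_loss.backward()    # outer step: gradient flows back through
    opt.step()              # the inner adaptation step
```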

5. Sequential Modeling

Many sequences, such as phrases in human language or groups of letters obeying phonotactic rules, have a natural segmental structure. Facebook AI Research (FAIR) uses convolutions for sequence-to-sequence learning, building hierarchical representations with multi-layer convolutions. In this way they captured the long-range dependencies of traditional LSTM-based architectures, while also using gated linear units, residual connections, and attention in every decoder layer.
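
One building block of such an architecture, sketched in PyTorch: a 1-D convolution whose output is gated by a gated linear unit (GLU) and wrapped in a residual connection; the channel count and kernel width are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One ConvS2S-style layer: the convolution emits twice the channels
# so the GLU can split them into a value half and a gate half; a
# residual connection helps stack many such layers. The full model
# also adds attention in every decoder layer.
class GLUConvBlock(nn.Module):
    def __init__(self, channels=256, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):               # x: (batch, channels, seq_len)
        return x + F.glu(self.conv(x), dim=1)   # gate, then residual

out = GLUConvBlock()(torch.randn(2, 256, 17))
print(out.shape)                        # torch.Size([2, 256, 17])
```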

6. Machine Learning Optimization

Just a few years ago, Microsoft Research India came up with powerful tree-based models that bring machine learning to resource-constrained devices, such as Internet of Things (IoT) endpoints with as little as 2 KB of RAM. Gradient Boosted Decision Trees (GBDT) perform well on classification problems, but when the output space of multilabel classification becomes high-dimensional and sparse, GBDT algorithms suffer from memory issues and long running times. The GBDT-Sparse algorithm was proposed to handle such high-dimensional sparse data.
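
The core idea can be sketched in a few lines: leaves store score vectors truncated to their top-k entries, so memory grows with k rather than with the number of labels. Everything below (label count, density, k) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

# Sparse-leaf sketch: instead of storing a dense score for every
# label, a leaf keeps only the k largest-magnitude entries of its
# (already sparse) gradient-based prediction.
rng = np.random.default_rng(0)

def topk_sparse(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

n_labels, k = 1000, 5
# gradient of the loss wrt one leaf's scores; in multilabel problems
# only the few active labels contribute, so it is naturally sparse
leaf_gradient = rng.standard_normal(n_labels) * (rng.random(n_labels) < 0.01)
leaf_value = topk_sparse(-leaf_gradient, k)   # truncated leaf prediction
print(np.count_nonzero(leaf_value), "scores stored instead of", n_labels)
```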

7. Natural Language Generation Architectures

The Latent Intention Dialogue Model learns intentions through a latent variable and then composes suitable machine responses, overcoming the limitations of discriminative models for natural language generation. The idea behind this research is to represent the latent intention distribution as an intrinsic policy that reflects human decision-making, learned through policy-gradient-based reinforcement learning.
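
A toy sketch of the latent-intention idea: sample a discrete intention from a learned distribution, generate a response conditioned on it, and reinforce intentions whose responses earn reward. The three intentions, canned responses, and reward oracle are invented for illustration:

```python
import numpy as np

# The latent intention distribution acts as an intrinsic policy:
# REINFORCE increases the probability of intentions whose generated
# responses lead to dialogue success.
rng = np.random.default_rng(0)
logits = np.zeros(3)                      # intention distribution params
responses = ["inform", "request", "confirm"]

def reward(intention):
    # stand-in for dialogue success: "confirm" is the right move here
    return 1.0 if responses[intention] == "confirm" else 0.0

for _ in range(500):
    p = np.exp(logits - logits.max()); p /= p.sum()
    z = rng.choice(3, p=p)                # sample a latent intention
    r = reward(z)
    # REINFORCE: push up log p(z) in proportion to the reward
    grad = -p; grad[z] += 1.0
    logits += 0.1 * r * grad

p = np.exp(logits - logits.max()); p /= p.sum()
print({resp: round(prob, 2) for resp, prob in zip(responses, p)})
```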

Related: Artificial Intelligence & the science of Future of Language Learning?