The energy consumption problem of artificial intelligence poses two major challenges. How will Qualcomm overcome them?

At the moment, artificial intelligence has penetrated every segment of the economy and industry, and many products already have artificial intelligence capabilities. That is the positive side. At the same time, it brings serious problems and challenges: as artificial intelligence develops, its energy consumption keeps growing. Some forecasts predict that by 2025, global data centers will consume 20% of all available electricity in the world.

In addition, the energy consumption of a deep neural network is roughly proportional to its size. According to the same projections, by 2025 the continued growth of neural networks is expected to push their scale to 100 trillion parameters, comparable to the capacity of the human brain. Networks of that scale will consume enormous amounts of energy. The human brain, however, is roughly 100 times more energy efficient than today's best hardware, so we should look to the brain for inspiration in developing more energy-efficient artificial intelligence technology.

The energy consumption problem of artificial intelligence poses two major challenges

According to Wellings, Vice President of Qualcomm Technologies, there are two important challenges related to the energy consumption of artificial intelligence. First, the economic value and benefits created by artificial intelligence must exceed the cost of running the service; otherwise the service is not profitable, and the excellent artificial intelligence technologies people have developed go unused. Whether it is ranking content by user preference on social networks or delivering personalized ads and recommendations, the cost of running these applications has to stay within a certain range. The same cost-control requirement applies when artificial intelligence is deployed in large smart cities and smart factories.

Wellings, Vice President of Qualcomm Technologies

Second, energy efficiency is itself a big challenge, because of thermal constraints on the edge side, that is, in mobile environments. For example, we cannot run power-hungry tasks on a phone, or the phone will get very hot. At the same time, we need to handle a large number of artificial intelligence workloads: very intensive computational analysis, complex concurrency (completing multiple tasks at the same time), and real-time, always-on operation. The mobile environment imposes a variety of other restrictions as well: the terminal is small, yet it must deliver long battery life to support all-day use, and because of its size, the memory and bandwidth of a mobile terminal are also limited.

"So, whether from the perspective of economic efficiency or thermal efficiency, we must reduce the energy consumption of running artificial intelligence," Wellings concluded. "I think future artificial intelligence algorithms will be measured not simply by how much intelligence they can provide, but by how much intelligence an algorithm provides per watt-hour. This will become an important metric for future artificial intelligence algorithms. Qualcomm has a great advantage here; low-power computing is what we have always been good at."

Qualcomm makes deep learning more efficient on the terminal side

Deep learning is an important change in the development of artificial intelligence. Driven by the great progress of neural networks, deep learning significantly improves prediction accuracy. In addition, Wellings noted that instead of manually defining features from large amounts of raw data such as sounds and signals, we should let the algorithm learn the features directly from the raw data; this is a huge breakthrough. The advantages of neural networks include the ability to automatically detect objects, share parameters very efficiently, use data more efficiently, and run quickly on modern hardware.
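To make the parameter-sharing point concrete, here is a minimal PyTorch sketch (an illustration, not code from Qualcomm): a convolutional layer reuses the same small set of weights at every image position, so it needs orders of magnitude fewer parameters than a fully connected layer producing the same output.

```python
# Minimal illustrative sketch: convolution shares one small set of weights across
# every image location, so it needs far fewer parameters than a dense layer.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)            # a single 32x32 RGB image

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # learned feature detectors
dense = nn.Linear(3 * 32 * 32, 16 * 32 * 32)        # dense mapping to the same output size

def n_params(m):
    return sum(p.numel() for p in m.parameters())

print("conv parameters:  ", n_params(conv))    # 448: 16*3*3*3 weights + 16 biases
print("dense parameters: ", n_params(dense))   # ~50 million for the same output size
print("conv output shape:", conv(x).shape)     # torch.Size([1, 16, 32, 32])
```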

Of course, deep learning also has room for improvement. In Wellings' view, the most important issue is that convolutional neural networks use too much memory, compute, and energy, and this urgently needs to be improved. In addition, neural networks are not rotation invariant, cannot quantify uncertainty, and are easily fooled by slight changes to the input.
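The "fooled by slight changes" point can be illustrated with the well-known fast gradient sign method. The sketch below (my illustration, not Qualcomm's work) nudges a toy input by a small amount in the direction that increases the loss, which can be enough to flip a classifier's prediction.

```python
# Illustrative sketch of the fast gradient sign method (FGSM): a tiny input
# perturbation along the loss gradient's sign can change a model's prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in "image"
y = torch.tensor([3])                              # stand-in true label

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1                                      # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # barely visible change

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())  # may differ
```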

In response to these challenges, Qualcomm has done a lot of work. Inspired by the human brain, Qualcomm began research on spiking neural networks more than a decade ago, which is one way to achieve low-power computation. Now, again drawing inspiration from the brain, Qualcomm is considering using noise to achieve low-power computation for deep learning.

Wellings explained: "The human brain is actually a noisy system; it knows how to deal with noise. I believe we can go further and use noise to benefit neural networks. In the field, we call this approach Bayesian deep learning, and it is an important basic framework for achieving this. Through Bayesian deep learning, we compress a neural network to a smaller size so that it runs more efficiently on the Snapdragon platform. We also use this framework to quantize the bit-widths of the computations we need to perform."

Speaking about how this noise helps with compression and quantization, Wellings explained that noise is introduced into the neural network, perturbing its parameters and connections; these perturbed parameters then propagate the noise to the activations, that is, to the individual neurons. If a neuron is dominated by noise, stores no information, and plays no role in prediction, it is pruned away. By pruning neurons, the neural network becomes smaller and runs faster both on a computer and on the Snapdragon platform.
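A minimal sketch of this idea (a simplification for illustration, not Qualcomm's implementation): if each weight carries a learned noise scale, units whose average signal-to-noise ratio falls below a threshold are mostly carrying noise and can be pruned.

```python
# Illustrative sketch of noise-based pruning: each output unit has weight means
# and learned noise scales; units drowned out by noise are removed.
import torch

torch.manual_seed(0)
out_features, in_features = 8, 16
w_mean = torch.randn(out_features, in_features)        # learned weight means
w_std = torch.rand(out_features, in_features) * 2.0    # learned noise scales

# Signal-to-noise ratio per output unit: low SNR means the unit mostly carries noise.
snr = (w_mean.abs() / w_std).mean(dim=1)
keep = snr > 1.0                                        # assumed pruning threshold

print("per-unit SNR:", [round(v, 2) for v in snr.tolist()])
print(f"keeping {int(keep.sum())} of {out_features} units")

pruned_w = w_mean[keep]                                 # smaller, faster layer
print("pruned weight shape:", tuple(pruned_w.shape))
```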

Beyond the compression and quantization just described, the Bayesian framework can solve many other problems. If a neural network has only been trained for a certain scenario, for example an autonomous car that has only been trained in one city, Bayesian deep learning can help it generalize when the car arrives in a new city. Qualcomm's view is that the smallest and simplest model that can explain the data is the most suitable model; this is Occam's razor. Bayesian learning can also produce confidence estimates that quantify the uncertainty of a neural network: when we add noise, the noise propagates into the prediction, producing a distribution of predicted values, which quantifies the prediction's confidence. Bayesian learning also helps make models less vulnerable to adversarial attacks, in which slight changes to the input produce different predictions. Finally, it helps protect users' personal privacy: because information in the training data can leak into the model parameters and even be reconstructed from them, adding noise helps protect that privacy. In general, Bayesian deep learning addresses many of the challenges faced by deep neural networks.
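One piece of this, the noise-based confidence estimate, can be sketched as follows (an illustration with an assumed noise level, not Qualcomm's method): sampling weight noise several times and propagating each sample through the network yields a spread of predictions whose width serves as the uncertainty estimate.

```python
# Illustrative sketch: propagate weight noise through a network many times and
# use the spread of the resulting predictions as an uncertainty estimate.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(1, 4)                  # a single input

noise_scale = 0.05                     # assumed noise level on the weights
predictions = []
for _ in range(100):
    w = [p.detach() + noise_scale * torch.randn_like(p) for p in model.parameters()]
    # Run the same architecture with perturbed weights: Linear -> ReLU -> Linear.
    h = torch.relu(x @ w[0].t() + w[1])
    out = h @ w[2].t() + w[3]
    predictions.append(out.item())

preds = torch.tensor(predictions)
print(f"mean prediction:   {preds.mean().item():.3f}")
print(f"uncertainty (std): {preds.std().item():.3f}")  # wider spread, lower confidence
```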

"As the compression ratio increases, the performance advantage of Bayesian deep learning over other methods becomes more obvious, and it runs more efficiently on mobile platforms. That is why we think Bayesian deep learning is especially suitable for mobile scenarios," Wellings added.

In addition, Qualcomm's current heterogeneous computing system includes three components: the CPU, the GPU, and the DSP. For more than a decade, Qualcomm has continued to improve these three components along multiple dimensions in every product development cycle. For example, in the cache structure, Qualcomm continuously optimizes the way memory works; it optimizes precision so that the required accuracy is achieved with minimum energy consumption; and it optimizes computational management, so that a computing task can be assigned to the GPU, the CPU, or the DSP, or all components can work together. Although only computation on a single terminal is managed today, Qualcomm has a longer-term vision: in the upcoming 5G era and in the context of the Internet of Everything, Qualcomm will orchestrate computation across the whole network of terminals and cloud, bringing powerful artificial intelligence systems to the edge of the network.

Qualcomm's three-tier efforts to accelerate artificial intelligence research

Qualcomm has also made great efforts to accelerate artificial intelligence research, including optimization and improvement at the computing architecture, memory hierarchy, and usage levels. In terms of computing architecture, Qualcomm focuses on optimizing instruction types and parallelism, as well as the precision required to run the computations; Bayesian deep learning helps find the optimal operating precision. Equally or even more important is the memory hierarchy: it is estimated that moving data to or from DRAM consumes roughly 200 times the power of an ALU operation, so the memory hierarchy must be optimized to reduce the power cost of data movement. At the usage level, Qualcomm is committed to optimizing hardware, software, and compilers to reduce computational redundancy and maximize computational throughput and memory bandwidth.
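A back-of-envelope calculation, using assumed energy figures consistent with the roughly 200x estimate above, shows why data movement can dominate the energy budget:

```python
# Back-of-envelope sketch with assumed figures (not from Qualcomm): why keeping
# data out of DRAM matters for energy.
ALU_OP_PJ = 1.0            # assumed energy of one ALU operation, in picojoules
DRAM_ACCESS_PJ = 200.0     # ~200x an ALU op, per the estimate in the text

macs = 1e9                 # 1 billion multiply-accumulates for one inference
dram_accesses = 1e7        # 10 million DRAM accesses (assumed; depends on data reuse)

compute_energy_mj = macs * ALU_OP_PJ * 1e-9          # picojoules -> millijoules
memory_energy_mj = dram_accesses * DRAM_ACCESS_PJ * 1e-9

print(f"compute energy: {compute_energy_mj:.1f} mJ")  # 1.0 mJ
print(f"DRAM energy:    {memory_energy_mj:.1f} mJ")   # 2.0 mJ
# Even with 100x fewer DRAM accesses than ALU ops, memory can dominate the budget.
```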

Wellings added: "An ecosystem of hardware, software, and algorithms is critical to us. Efficient hardware will evolve to accommodate new algorithms in the field of artificial intelligence."

He went further: "We focus on how to run convolutional neural networks more efficiently on hardware, and on developing new, more efficient hardware for running neural networks. On the algorithm side, we make sure the algorithms run efficiently on the Snapdragon platform. All of this is tied together by software tools, namely our Snapdragon Neural Processing SDK. You can think of the software as a bridge between hardware and algorithms. For example, you build your favorite model, or run your favorite artificial intelligence tests, on the Snapdragon platform; when you feed the model into the Neural Processing SDK, the available software tools help you compress and quantize it to make sure your model or test runs efficiently on the Snapdragon platform."
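As a rough stand-in for that compress-and-quantize step, here is a generic post-training quantization sketch using PyTorch's built-in dynamic quantization. This is not the Snapdragon Neural Processing SDK, only an illustration of what quantizing a model's weights to 8-bit integers looks like.

```python
# Generic post-training quantization sketch (illustration only; the Snapdragon
# Neural Processing SDK uses its own converter and quantizer tools).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Dynamic quantization stores Linear weights as 8-bit integers instead of 32-bit
# floats, shrinking the model roughly 4x and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print("float32 output:", model(x)[0, :3])
print("int8 output:   ", quantized(x)[0, :3])   # close to, but not identical with, float32
```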
