
INTELLERCE LLC Awarded Competitive Grant from the National Science Foundation

Small Business Innovation Research Program Provides Seed Funding for R&D

San Diego, CA, Aug. 2021 – INTELLERCE LLC has been awarded a National Science Foundation (NSF) Small Business Innovation Research (SBIR) grant for $256,000 to conduct research and development (R&D) work on AptStream.

The proposed Artificial Intelligence-based platform reduces the need for data centers and Content Distribution Networks (CDNs), leading to lower video streaming costs. Furthermore, by potentially reducing network congestion and increasing network resilience to failures, the resulting solution can help improve viewers’ experience. Given that video makes up a significant share of Internet traffic (over 80%), a broader environmental impact of the proposed solution, if widely implemented, is expected to be a tangible reduction in the energy consumption and carbon footprint of the cloud computing infrastructure currently needed to handle this content.

“NSF is proud to support the technology of the future by thinking beyond incremental developments and funding the most creative, impactful ideas across all markets and areas of science and engineering,” said Andrea Belz, Division Director of the Division of Industrial Innovation and Partnerships at NSF. “With the support of our research funds, any deep technology startup or small business can guide basic science into meaningful solutions that address tremendous needs.”

“NSF, through this program, helps us de-risk a novel technology by allowing us to conduct research on a fundamental problem without losing sight of its commercial value,” said Dr. Omidvar, Managing Member of INTELLERCE LLC.

Once a small business is awarded a Phase I SBIR/STTR grant (up to $256,000), it becomes eligible to apply for a Phase II (up to $1,000,000). Small businesses with Phase II funding are eligible to receive up to $500,000 in additional matching funds with qualifying third-party investment or sales.

Startups or entrepreneurs who submit a three-page Project Pitch will know within one month if they meet the program’s objectives to support innovative technologies that show promise of commercial and/or societal impact and involve a level of technical risk. Small businesses with innovative science and technology solutions and commercial potential are encouraged to apply. All proposals submitted to the NSF SBIR/STTR program, also known as America’s Seed Fund powered by NSF, undergo a rigorous merit-based review process. To learn more about America’s Seed Fund powered by NSF, visit: https://seedfund.nsf.gov/

About the National Science Foundation’s Small Business Programs: America’s Seed Fund powered by NSF awards $200 million annually to startups and small businesses, transforming scientific discovery into products and services with commercial and societal impact. Startups working across almost all areas of science and technology can receive up to $2 million to support research and development (R&D), helping de-risk technology for commercial success. America’s Seed Fund is congressionally mandated through the Small Business Innovation Research (SBIR) program. The NSF is an independent federal agency with a budget of about $8.5 billion that supports fundamental research and education across all fields of science and engineering.


What is AI and what can be done with it today and tomorrow?

For decades, scientists have been trying to find a way to mimic the behavior of the human brain. After all, if we could make a working “brain” in a computer, we could make as many copies of it as we wanted and quickly and efficiently scale up our capabilities. This, of course, has remained an elusive dream for quite some time, and it continues to be one today. So what exactly is meant by AI today?

When the term AI was first introduced seven decades ago by John McCarthy, communication and computation capacities were orders of magnitude lower than what we have today. So, when scientists tried to design computer models that mimic the behavior of our brains, they had to come up with mathematical models by hand that worked well for particular tasks, such as classifying data points. Over time, various problems were solved that can be considered as partially mimicking human behavior. But these solutions arrived so slowly that people tended to forget they were indeed revolutionary.

This trend, however, has changed dramatically over the past few years, thanks to exponential growth in the amount of available data, the speed of communication, and the power of computation and parallelization. These changes have paved the way for techniques that exploit large amounts of data and highly parallelized computing power to significantly reduce the manual work previously needed to design such models, producing revolutionary solutions along the way. Perhaps this is one reason we hear about AI much more often than a decade ago, even though the term existed back then as well.

Now let us talk about what these models are and why they have been producing new and revolutionary solutions so frequently of late. The models that AI practitioners work with today are called Artificial Neural Networks, or ANNs for short. ANN models are attempts at mimicking the behavior of the human brain, and they too have been around for quite some time. But without the high computing capacity and massive parallelization that exist today, we were not able to “train” very large ANNs. Today’s ANNs consist of many layers of artificial neurons (each essentially performing multiplications, summations, and some nonlinear function) and are sometimes called Deep Neural Networks.
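
As a minimal sketch of what one such layer computes (multiplications, summations, and a nonlinearity; the weights and inputs below are arbitrary numbers chosen for illustration):

```python
def dense_layer(inputs, weights, biases):
    """One layer of artificial neurons: each neuron takes a weighted sum
    of the inputs, adds a bias, and applies a nonlinearity (here, ReLU)."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        s = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(max(0.0, s))  # ReLU: pass positive values, zero out the rest
    return outputs

# A "deep" network is nothing more than layers like this composed in sequence.
hidden = dense_layer([1.0, 2.0],
                     weights=[[0.5, -1.0], [0.25, 0.75]],
                     biases=[0.0, 0.1])
```

Stacking many such layers, with weights adjusted automatically during training, is what turns this simple arithmetic into a Deep Neural Network.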

When companies talk about using AI, what they usually mean is using Deep Neural Networks (DNNs), today’s best attempt at modeling the behavior of our brains. Currently, the relationship between ANNs and the brain’s actual behavior is loose at best, but we can say that the design of ANNs is inspired by how our brains work.
 

If we dive deep into the details of how these networks work, we will find deep connections with probability theory and statistics, two closely related mathematical fields. Probability is the science of the likelihood of events and expected values, and statistics is the science of making sense of data with the help of probability. The objectives of statistics can be roughly divided into two categories: predicting a future data point given a past sequence of data points, or gaining insight into how different data points relate to each other. In the first task we usually deal with a sophisticated, mostly “black box” model that captures the complex behavior of our dataset; in the second, we usually deal with much simpler but very well studied models that give us a great deal of insight into what is actually happening with the data.
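
As a small example of the second, insight-oriented category, a least-squares line fit produces a coefficient we can read directly (the data points here are made up for illustration):

```python
def fit_line(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares.
    The slope is directly interpretable: how much y changes per unit of x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
```

Unlike a deep network, the fitted numbers themselves are the insight: a slope near 2 says y grows about two units per unit of x.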

We can think of DNNs as models in the first category: sophisticated models that capture the behavior of the data and have great predictive capabilities but are too complicated to be completely understood mathematically. Progress is constantly being made toward understanding how these models work, but by the time one model is well understood, several new models have been introduced that perhaps perform better than the previous ones. Nevertheless, the two processes work hand in hand: the more we know about how these systems work, the better we can improve them in the next iteration.

One of the reasons these “black box” models have been rapidly improving and solving new problems, in addition to the factors mentioned above, is that the “modules” discovered to work well for one task also tend to work well for other, seemingly unrelated tasks, and the general mathematical framework of one problem usually extends easily to others. When working with a team of AI practitioners, however, caution is needed: more often than not, a problem has a well-studied mathematical solution that performs better, at much lower cost, than an AI-based approach.

Now we can look at some of the recent revolutionary AI-based solutions. Please note that this list is by no means comprehensive; we only discuss a small set of recent works.

1. Computer Vision 

Perhaps the first application of DNNs to attract widespread attention was the use of Convolutional Neural Networks (CNNs) for object detection in images. A CNN is a type of ANN in which a mathematical operation called convolution plays a key role. A convolution can be roughly described as sliding a small box of pixels over the data and comparing the box with the data underneath it to create new outputs. CNNs achieved great success by applying this operation repeatedly, first to the input images and then to the outputs of the operation itself. Over the years, several interesting and challenging image and video processing tasks have been successfully solved by variants of CNNs. Some of these tasks are as follows.

  • Object detection, where the goal is to find the location as well as the label of different objects in an image. 
  • Face recognition, where the goal is to identify the person whose face appears in an image. 
  • Semantic segmentation, where the goal is to find the pixels belonging to each specific object in an image. 
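
The sliding-box description above can be sketched in a few lines of plain Python (illustrative only; a real CNN learns many such kernels and stacks them in layers):

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image; each output value is the sum of
    elementwise products between the kernel and the patch beneath it."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(kernel[a][b] * image[i + a][j + b]
                            for a in range(kh) for b in range(kw))
    return out

# A tiny vertical-edge detector: it responds where pixel values jump left to right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
edges = convolve2d(image, kernel)  # large values mark the 0-to-1 boundary
```

Hand-crafted kernels like this were used in classical image processing; the breakthrough of CNNs was to learn the kernel values from data instead.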

2. Image and Video Generation 

Shortly after CNNs came into heavy use, Generative Adversarial Networks (GANs) were introduced. While the previous tasks used CNNs to extract important features from a given image, here the goal is to generate images from a set of random features such that the generated image is as close to a real image as possible. The name GAN comes from the fact that during training, two networks are pitted against each other: one network is responsible for generating the image, and the other is responsible for judging whether the output looks realistic. Mathematically, the generative part of the network is capturing the probability distribution of the space of images it is trained on. This concept, too, had been around for some time, mostly via another type of ANN called the Variational Auto-Encoder (VAE); however, the quality of GAN outputs when they were introduced was quite astonishing. Nowadays, the two families produce comparable results, each drawing inspiration from the other’s successes.
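
The adversarial objective can be sketched in miniature (this is not a full GAN; the two scoring functions below are hypothetical stand-ins for real generator and discriminator networks):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_loss(d_real, d_fake):
    """The discriminator wants D(real) near 1 and D(fake) near 0."""
    return -math.log(d_real) - math.log(1.0 - d_fake)

def generator_loss(d_fake):
    """The generator is rewarded when the discriminator is fooled,
    i.e. when D(fake) is pushed toward 1."""
    return -math.log(d_fake)

# If the discriminator confidently spots the fake (D(fake) near 0.10),
# the generator's loss is high; as the fakes improve, that loss falls.
poor_fake = generator_loss(sigmoid(-2.2))   # D(fake) about 0.10
good_fake = generator_loss(sigmoid(2.2))    # D(fake) about 0.90
```

Training alternates between the two losses, so each network's improvement raises the bar for the other, which is exactly the "pitted against each other" dynamic described above.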

3. Control and Strategic Decision Making 

The use of AI in control and decision making became mainstream when DeepMind, a UK company later acquired by Google, introduced its ANN for playing Atari games. More strikingly, the company’s AlphaGo project, in which an AI beat a Go champion, proved that AI can be quite effective at solving problems previously thought solvable only by humans. As the game of Go requires a great deal of knowledge and experience, as well as planning many moves ahead, experts were shocked to see the AI beat champions and introduce new, extremely sophisticated moves that were previously unknown to us.

The method behind training these ANNs is called Reinforcement Learning (RL). In RL, an agent is placed in an environment and, over time and based on the decisions it makes, learns how to improve its strategy. What is happening internally is that the ANN learns to predict the future “value” of each of its possible actions and tries to take the actions with the highest future values.
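
The value-prediction idea can be illustrated with tabular Q-learning on a hypothetical one-dimensional corridor, a deliberately tiny stand-in for the Atari and Go settings above (no neural network needed at this scale):

```python
import random

random.seed(0)

# A toy corridor with states 0..4 and a reward only at the right end. The
# agent learns, by trial and error, that moving right has higher long-term
# "value" -- the core idea behind value-based reinforcement learning.
N_STATES, GOAL = 5, 4
ACTIONS = (+1, -1)                 # move right / move left
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit current value estimates, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge Q toward reward + discounted best future value
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next
```

After training, the learned values prefer "right" in every state, even in states far from the reward, because value estimates propagate backward from the goal. Deep RL replaces the Q table with an ANN so the same scheme scales to huge state spaces like game screens.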

4. Natural Language Processing 

One of the areas where AI has very recently shown a great deal of success is Natural Language Processing, or NLP. There are numerous tasks within NLP; however, one of its main goals is to successfully predict the next word given a sequence of words. An ANN capable of doing this has managed not only to find an appropriate mathematical space in which to represent words but also to capture the relationships among them in each context. Successful recent work has come from companies such as Google and OpenAI. The OpenAI team’s model is capable of producing text that is difficult to distinguish from text written by a human.
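
A heavily simplified sketch of next-word prediction, using bigram counts instead of an ANN (the corpus and helper names here are made up for illustration):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word: the simplest
    possible 'language model'."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, word):
    """Predict the most frequent successor of the given word."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept near the cat"
model = train_bigrams(corpus)
prediction = predict_next(model, "the")  # "cat" follows "the" most often here
```

Modern language models replace these raw counts with learned vector representations of words and deep networks over whole contexts, but the objective, predicting the next word, is the same.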

5. Predictive Analysis 

As is clear from the discussion above, the predictive capability of AI systems is not constrained to specific tasks. Any company with data can harness its power to better understand its business and use AI to make better decisions. Many hedge funds already use AI to make investment decisions, and many more companies are expected to adopt AI for important decision-making processes.

6. Voice Recognition and Voice Generation 

We are perhaps most familiar with these tasks thanks to Apple’s Siri and Amazon’s Alexa. The use of AI has significantly improved voice recognition and voice generation, leading to their wide adoption by industry. Furthermore, nowadays almost every major company offers customer service that is at least partly powered by AI.

What to expect in the future? 

As the volume of data increases exponentially and the power of parallelized computation is harnessed more than ever, all the tasks mentioned above can be expected to see at least incremental improvements in the coming years. The combination of different AI tasks is perhaps what will lead to the most interesting upcoming technologies, such as fully autonomous cars and more natural-sounding assistants.

One of the upcoming challenges for AI, however, is that people are becoming more aware of their privacy and of how their information is traded by companies. On the one hand, people would like to keep their information to themselves; on the other, end users’ devices are becoming more capable and AI-friendly over time. So a likely change in the near future is the large-scale implementation of AI solutions on end users’ devices, along with methods to preserve users’ privacy.

Today’s AI is capable of performing impressive tasks; however, true intelligence is achieved when a single AI agent can perform a wide variety of tasks. As such, today’s AI is known as Weak AI, and what remains to be achieved in the coming years is so-called Strong AI, or Artificial General Intelligence (AGI).

References: 

http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

https://projects.csail.mit.edu/films/aifilms/AIFilms.html

https://openai.com/

https://en.wikipedia.org/wiki/Convolutional_neural_network

https://deepmind.com/research/publications/playing-atari-deep-reinforcement-learning

https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark