XNOR is a spinoff of the Paul Allen-backed AI2 (the Allen Institute for Artificial Intelligence). Its product is fundamentally a proprietary technique for converting machine learning models into a form that can be executed rapidly by almost any processor.
The resulting savings in power, memory, and speed are massive, allowing devices, particularly those with bargain-bin CPUs, to carry out tasks such as real-time object tracking and recognition that normally require serious processing power.
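XNOR has not published the details of its production pipeline, but the company grew out of the XNOR-Net line of research, in which network weights and activations are binarized to +1/-1 so that an expensive floating-point dot product collapses into a bitwise XNOR followed by a population count, operations even a cheap CPU handles quickly. A minimal sketch of that core trick (all function names here are illustrative, not XNOR's actual API):

```python
def pack_bits(values):
    """Pack a list of +1/-1 values into an integer bit mask (+1 -> 1, -1 -> 0)."""
    mask = 0
    for i, v in enumerate(values):
        if v == 1:
            mask |= 1 << i
    return mask

def binary_dot(a_mask, b_mask, n):
    """Dot product of two packed +/-1 vectors of length n.

    Matching bit pairs (XNOR == 1) contribute +1 and mismatches -1,
    so the result is 2 * popcount(XNOR(a, b)) - n.
    """
    xnor = ~(a_mask ^ b_mask) & ((1 << n) - 1)  # keep only the n valid bits
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binary_dot(pack_bits(a), pack_bits(b), len(a)))  # equals sum(x*y for x, y in zip(a, b)): 0
```

A real binarized network applies this at the level of whole weight matrices with hardware popcount instructions, which is where the large power and memory savings come from.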
Since its launch, XNOR has raised $2.6 million in seed financing. Recently, the startup completed its Series A round of funding, led by Madrona Venture Group alongside other investors such as Catapult Ventures, Autotech Ventures, and NGP Capital.
According to Ali Farhadi, co-founder of XNOR, artificial intelligence (AI) has made commendable progress. However, to become a truly revolutionary technology, it must scale well beyond its current reach.
Farhadi also pointed out that the primary challenge is that artificial intelligence is too costly in terms of both money and processing time. To put this into perspective, almost all leading AI products run their operations on numerous computers in the cloud. When you send an image, for instance, an AI product processes it with a machine learning model hosted in a data center before you receive the results.
Getting a response from Alexa in one or two seconds may be acceptable. But when you need results in a fraction of a second, cloud processing will not do. For this reason, XNOR uses a technique that enables technologies such as voice recognition and computer vision to be stored and run directly on devices with extremely limited RAM and processing power.
Because XNOR's models run on the device itself, the data never has to be shared with other parties, unlike with cloud processing. This feature is a plus for the startup. Building a model for edge computing is expensive, and even though the number of AI developers is growing, only a few are trying to target resource-limited devices such as cheap security cameras or old phones.
XNOR’s platform lets a manufacturer or developer plug in a handful of basic features and receive a model pre-trained for their requirements. For instance, if you operate a parking lot and need to identify license plates, people lurking suspiciously, and empty spots, this is the model for you.
According to Farhadi, XNOR has already identified the most common devices and use cases through research and feedback in phase 1 of the technology. Phase 2, starting in 2019, will allow more customization, while phase 3 will involve taking models that normally run on cloud infrastructure and adapting them for edge deployment.