Projects
PyTorch
An open source machine learning framework that accelerates the path from research prototyping to production deployment.
Intel nGraph
An intermediate representation, compiler, and executor for deep learning models. It simplifies the deployment of deep learning models across different frameworks and hardware backends.
ChakraCore
The core of the Chakra JavaScript engine, a high-performance engine used in Microsoft Edge.
Eclipse OpenJ9 / OMR
A high performance, scalable, Java virtual machine implementation that is fully compliant with the Java Virtual Machine Specification.
Publications
FTL Model Compiler Framework
Nuro.ai Blog · 2021
For an autonomous vehicle (AV), fast and efficient model inference is critical. The FTL Model Compiler Framework addresses this by providing universal support for frameworks via ONNX, orchestrating graph segmentation for hardware backends like TensorRT, and optimizing with multiple sub-compiler passes.
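The graph-segmentation step described above can be sketched as a simple partitioning pass. This is a minimal, illustrative toy (the op names and backend capabilities are assumptions, not the actual FTL implementation): a linear sequence of ops is split into maximal runs that a TensorRT-like backend can execute, with unsupported ops falling back to the host runtime.

```python
# Toy sketch of compiler-style graph segmentation.
# SUPPORTED_BY_BACKEND is a hypothetical capability set, not TensorRT's real one.
SUPPORTED_BY_BACKEND = {"conv", "relu", "matmul"}

def segment(ops):
    """Split a linear op sequence into maximal runs per execution target."""
    segments = []
    for op in ops:
        target = "backend" if op in SUPPORTED_BY_BACKEND else "host"
        if segments and segments[-1][0] == target:
            segments[-1][1].append(op)  # extend the current run
        else:
            segments.append((target, [op]))  # start a new segment
    return segments
```

In a real compiler the input is a dataflow graph rather than a list, but the principle is the same: group supported subgraphs so each backend receives the largest contiguous region it can optimize.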
Intel nGraph: An Intermediate Representation, Compiler, and Executor for Deep Learning
arXiv · Jan 24, 2018
The Deep Learning (DL) community sees many novel topologies published each year. Achieving high performance on each new topology remains challenging, as each requires some level of manual effort. This issue is compounded by the proliferation of frameworks and hardware platforms. The current approach, which we call “direct optimization”, requires deep changes within each framework to improve the training performance for each hardware backend (CPUs, GPUs, FPGAs, ASICs) and requires 𝒪(fp) effort; where f is the number of frameworks and p is the number of platforms. While optimized kernels for deep-learning primitives are provided via libraries like Intel Math Kernel Library for Deep Neural Networks (MKL-DNN), there are several compiler-inspired ways in which performance can be further optimized. Building on our experience creating neon (a fast deep learning library on GPUs), we developed Intel nGraph, a soon to be open-sourced C++ library to simplify the realization of optimized deep learning performance across frameworks and hardware platforms. Initially-supported frameworks include TensorFlow, MXNet, and Intel neon framework. Initial backends are Intel Architecture CPUs (CPU), the Intel(R) Nervana Neural Network Processor(R) (NNP), and NVIDIA GPUs. Currently supported compiler optimizations include efficient memory management and data layout abstraction. In this paper, we describe our overall architecture and its core components. In the future, we envision extending nGraph API support to a wider range of frameworks, hardware (including FPGAs and ASICs), and compiler optimizations (training versus inference optimizations, multi-node and multi-device scaling via efficient sub-graph partitioning, and HW-specific compounding of operations).
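The effort argument in the abstract can be made concrete with a quick count: direct optimization needs one integration per (framework, platform) pair, while a shared intermediate representation needs only one bridge per framework plus one backend per platform. The small sketch below just illustrates that arithmetic; the function names are mine, not nGraph's.

```python
# Integration-effort comparison from the abstract:
# f frameworks, p platforms.

def direct_integrations(f, p):
    """Direct optimization: every framework is ported to every platform."""
    return f * p  # O(fp)

def ir_integrations(f, p):
    """Shared IR: one framework bridge each, one hardware backend each."""
    return f + p  # O(f + p)
```

With the three initially supported frameworks and three initial backends named in the abstract, the direct approach already requires 9 integrations versus 6 for the IR, and the gap widens as either dimension grows.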
Trust prediction from user-item ratings
SpringerLink / Social Network Analysis and Mining, Volume 3 · Jan 1, 2013
Trust relationships between users in various online communities are notoriously hard to model for computer scientists. It can be easily verified that trying to infer trust based on the social network alone is often inefficient. Therefore, the avenue we explore is applying Data Mining algorithms to unearth latent relationships and patterns from background data. In this paper, we focus on a case where the background data are user ratings for online product reviews. We consider as a testing ground a large dataset provided by Epinions.com that contains a trust network as well as user ratings for reviews on products from a wide range of categories. In order to predict trust we define and compute a critical set of features, which we show to be highly effective in providing the basis for trust predictions. Then, we show that state-of-the-art classifiers can do an impressive job in predicting trust based on our extracted features. For this, we employ a variety of measures to evaluate the classification based on these features. We show that by carefully collecting and synthesizing readily available background information, such as ratings for online reviews, one can accurately predict social links based on trust.
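The feature-extraction step the abstract describes can be sketched as follows. This is a minimal illustration assuming per-item rating dictionaries; the feature names here are hypothetical stand-ins, not the paper's actual feature set.

```python
# Toy pairwise features for trust prediction from user-item ratings.
# Feature names ("overlap", "mean_abs_diff") are illustrative assumptions.

def pair_features(ratings_a, ratings_b):
    """ratings_{a,b}: dict mapping item id -> rating (e.g. 1-5).

    Returns features describing how the two users' ratings relate,
    suitable as classifier input for predicting a trust link.
    """
    common = set(ratings_a) & set(ratings_b)
    n = len(common)
    if n == 0:
        return {"overlap": 0, "mean_abs_diff": None}
    mad = sum(abs(ratings_a[i] - ratings_b[i]) for i in common) / n
    return {"overlap": n, "mean_abs_diff": mad}
```

In the paper's setting, vectors like this (computed over the Epinions.com rating data) would be fed to off-the-shelf classifiers to predict whether a trust edge exists between the two users.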
Predicting Trust from User Ratings
Elsevier / The 3rd International Conference on Ambient Systems, Networks and Technologies · Jan 1, 2012
Trust relationships between users in various online communities are notoriously hard to model for computer scientists. It can be easily verified that trying to infer trust based on the social network alone is often inefficient. Therefore, the avenue we explore is applying Data Mining algorithms to unearth latent relationships and patterns from background data. In this paper, we focus on a case where the background data are user ratings for online product reviews. We consider as a testing ground a large dataset provided by Epinions.com that contains a trust network as well as user ratings for reviews on products from a wide range of categories. In order to predict trust we define and compute a critical set of features, which we show to be highly effective in providing the basis for trust predictions. Then, we show that state-of-the-art classifiers can do an impressive job in predicting trust based on our extracted features. For this, we employ a variety of measures to evaluate the classification based on these features. We show that by carefully collecting and synthesizing readily available background information, such as ratings for online reviews, one can accurately predict social links based on trust.
Predicting trust from user ratings (Master’s Thesis)
University of Victoria · Jan 1, 2011
See more on DBLP.