
Research Focus

At Unneurotic, we're rethinking fundamental AI architectures to make them more efficient, scalable, and accessible. Our work spans from theoretical foundations to practical implementations.

We publish our findings in top conferences and often release open-source implementations of our methods to advance the field collectively.

Research Areas


Parallel Inferencing

Our proprietary parallel inference architecture breaks large models into segments that are processed simultaneously across multiple devices, reducing inference latency by up to 70% compared to sequential processing.

Read our NeurIPS paper →
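The core idea behind segment-level parallel inference can be illustrated with a toy pipeline: each model segment runs in its own worker, connected by queues, so segment i can start on the next input while segment i+1 is still processing the previous one. This is a minimal sketch in plain Python threads, not Unneurotic's actual architecture; the `run_pipeline` helper and the lambda "segments" are illustrative stand-ins for real model shards on real devices.

```python
import queue
import threading

def run_pipeline(segments, inputs):
    """Run `segments` (a list of functions) as pipeline stages linked by queues.

    Each stage runs in its own thread, so different inputs occupy different
    stages at the same time -- the source of the latency win over running
    every input through the whole model sequentially.
    """
    qs = [queue.Queue() for _ in range(len(segments) + 1)]
    SENTINEL = object()  # marks end-of-stream through the pipeline

    def stage(fn, q_in, q_out):
        while True:
            x = q_in.get()
            if x is SENTINEL:
                q_out.put(SENTINEL)  # propagate shutdown downstream
                return
            q_out.put(fn(x))

    threads = [
        threading.Thread(target=stage, args=(fn, qs[i], qs[i + 1]))
        for i, fn in enumerate(segments)
    ]
    for t in threads:
        t.start()
    for x in inputs:
        qs[0].put(x)
    qs[0].put(SENTINEL)

    out = []
    while True:
        y = qs[-1].get()
        if y is SENTINEL:
            break
        out.append(y)
    for t in threads:
        t.join()
    return out

# Three toy "segments" standing in for consecutive slices of a model.
segments = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
print(run_pipeline(segments, [1, 2, 3]))  # [1, 3, 5]
```

In a real system the stages would be model shards on separate accelerators and the queues would be device-to-device transfers, but the overlap structure is the same.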

Neural Architecture Search

We've developed evolutionary algorithms that discover model architectures with 40% fewer parameters while maintaining performance comparable to hand-designed models.

ICML publication →
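The shape of an evolutionary architecture search with a complexity penalty can be sketched in a few lines: keep the fittest half of a population, refill it by mutating survivors, and score each candidate by accuracy minus a parameter-count penalty. Everything here is illustrative — the `(depth, width)` encoding, the accuracy proxy, and the penalty weight are toy assumptions, not the EvoArch method.

```python
import random

def evolve(fitness, mutate, init, population=20, generations=30, seed=0):
    """Toy evolutionary search: keep the fittest half, refill by mutation."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:population // 2]
        pop = survivors + [
            mutate(rng.choice(survivors), rng)
            for _ in range(population - len(survivors))
        ]
    return max(pop, key=fitness)

# Hypothetical "architecture" = (depth, width). The accuracy proxy saturates
# as the model grows, so the complexity penalty pushes toward smaller models.
def accuracy(arch):
    depth, width = arch
    return 1.0 - 1.0 / (1 + 0.1 * depth * width)

def fitness(arch):
    depth, width = arch
    params = depth * width
    return accuracy(arch) - 0.001 * params  # complexity regularization

def init(rng):
    return (rng.randint(1, 10), rng.randint(8, 64))

def mutate(arch, rng):
    depth, width = arch
    return (max(1, depth + rng.choice([-1, 0, 1])),
            max(1, width + rng.choice([-8, 0, 8])))

best = evolve(fitness, mutate, init)
print(best)
```

With a real search, `fitness` would involve training (or estimating) each candidate's accuracy, which is where virtually all of the compute goes; the evolutionary loop itself is cheap.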

Model Quantization

Our adaptive quantization techniques let models retain 99% of their original accuracy while cutting memory requirements by 4x, enabling deployment on edge devices.

View GitHub repository →
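Where the 4x figure comes from is easy to see with basic symmetric int8 quantization: float32 weights (4 bytes each) become int8 values (1 byte each) plus a scale factor. This is a plain uniform-quantization sketch for illustration, not the adaptive mixed-precision scheme from the paper.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# float32 -> int8 is a 4x memory reduction (ignoring the single scale scalar).
print(w.nbytes // q.nbytes)  # 4

# Rounding error per weight is bounded by half the quantization step.
err = np.abs(dequantize(q, scale) - w).max()
print(err <= scale / 2 + 1e-6)  # True
```

Adaptive schemes improve on this by choosing bit-widths and scales per layer or per channel, which is how accuracy stays near the original at the same overall compression.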

Sparse Dynamics

We're pioneering methods that leverage the inherent sparsity in neural activations to dramatically reduce computational requirements during inference without sacrificing model quality.

Latest preprint →
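The intuition behind sparsity-aware inference is that ReLU-style activations are mostly zero, and zero rows contribute nothing to the next layer's matrix product, so they can be skipped entirely. A minimal NumPy sketch (the shapes, threshold, and names are illustrative assumptions, not our production kernels):

```python
import numpy as np

rng = np.random.default_rng(1)

# ReLU'd activations with a shifted pre-activation, so most entries are zero.
h = np.maximum(rng.standard_normal(512) - 1.0, 0.0)
W = rng.standard_normal((512, 64))

nz = np.flatnonzero(h)        # indices of the active neurons
sparse_out = h[nz] @ W[nz]    # compute using only the nonzero rows of W
dense_out = h @ W             # full dense computation, for comparison

print(len(nz) / h.size)                     # fraction of active units (well under 1)
print(np.allclose(sparse_out, dense_out))   # True
```

The sparse path touches only `len(nz)` of the 512 rows of `W`, and the result matches the dense product exactly; real gains depend on hardware support for gathering those rows efficiently.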

Publications

Dynamic Model Parallelism for Efficient Inference of Large Transformers

NeurIPS 2024

E. Torres, M. Chen, A. Singh

EvoArch: Evolutionary Neural Architecture Search with Adaptive Complexity Regularization

ICML 2024

A. Singh, E. Torres, J. Park

AdaQuant: Gradient-Based Mixed-Precision Quantization for Deep Neural Networks

ICLR 2024

M. Chen, E. Torres, R. Kumar

Join Our Research Efforts

We're always looking for talented researchers and engineers to join our team.

View Open Positions