R&D AI Engineer, Austin, TX / Remote Office

Remote Full-time
Position: Staff R&D AI Engineer, Austin, TX / Remote Office

About Us
We are establishing the first distributed AI infrastructure dedicated to personalized AI. The evolving needs of a data-driven society demand scalability and flexibility. We believe the future of AI is distributed, enabling real-time data processing at the edge, closer to where data is generated. We are building a future in which a company's data and IP remain private, and large models can be brought directly to consumer hardware without removing information from the model.

Role Overview
As a Staff R&D AI Engineer, you will lead the development of cutting-edge AI systems that bridge computer vision, natural language understanding, and action learning. You'll architect and implement Vision-Language-Action (VLA) models, advance reinforcement learning applications, and push the boundaries of multimodal AI integration. This role combines deep expertise in computer vision and large language models with hands-on experience in reinforcement learning to create intelligent systems that can understand, reason about, and interact with complex environments. You'll drive research initiatives, mentor technical teams, and translate breakthrough AI research into practical applications across diverse domains.

Key Responsibilities
• Design and develop Vision-Language-Action (VLA) models that integrate visual perception, natural language understanding, and action prediction (see the sketch after this list)
• Architect and implement reinforcement learning systems for sequential decision-making, including policy learning and skill acquisition
• Build and optimize computer vision pipelines for perception tasks, including object detection, segmentation, tracking, and scene understanding
• Develop and fine-tune large language models for instruction following, reasoning, and task planning applications
• Implement RLHF (Reinforcement Learning from Human Feedback) systems to improve model alignment and safety
• Create multimodal training pipelines that leverage synthetic and real-world data for robust model performance
• Research and prototype novel AI architectures that combine vision, language, and action learning
• Collaborate with engineering teams to integrate AI models into applications and validate performance across domains
• Optimize model inference performance for real-time applications across edge and cloud deployments
• Lead technical initiatives, mentor junior AI engineers, and establish best practices for AI model development
• Stay current with the latest research in VLA models, multimodal AI, and robotics to drive the innovation roadmap
• Present findings at conferences and publish research to advance the field
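As a point of reference for the kind of multimodal integration described above, here is a minimal, hypothetical PyTorch sketch of a VLA-style policy: a small vision encoder and a small language encoder feed a shared action head. The module names, toy encoders, and dimensions are illustrative assumptions only, not this team's actual architecture.

```python
# Hypothetical, minimal sketch of a Vision-Language-Action (VLA) policy in PyTorch.
# All module names, dimensions, and encoders are illustrative assumptions.
import torch
import torch.nn as nn


class TinyVLAPolicy(nn.Module):
    """Fuses image features and instruction embeddings, then predicts an action."""

    def __init__(self, vocab_size=1000, embed_dim=128, action_dim=7):
        super().__init__()
        # Vision encoder: a small CNN standing in for a ViT/CLIP-style backbone.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Language encoder: embedding + transformer encoder standing in for an LLM.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Action head: maps the fused representation to continuous action outputs.
        self.action_head = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, action_dim),
        )

    def forward(self, image, instruction_tokens):
        img_feat = self.vision(image)                         # (B, embed_dim)
        txt_feat = self.text(self.embed(instruction_tokens))  # (B, T, embed_dim)
        txt_feat = txt_feat.mean(dim=1)                       # pool over tokens
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.action_head(fused)                        # (B, action_dim)


if __name__ == "__main__":
    policy = TinyVLAPolicy()
    image = torch.randn(2, 3, 64, 64)          # batch of RGB observations
    tokens = torch.randint(0, 1000, (2, 16))   # tokenized instructions
    print(policy(image, tokens).shape)         # torch.Size([2, 7])
```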
Qualifications & Skills
• 7+ years of experience in AI/ML engineering, with 4+ years focused on deep learning and neural network development
• Strong understanding of reinforcement learning algorithms and their applications (PPO, SAC, TD3, etc.; see the sketch after this list)
• Strong expertise in both computer vision and natural language processing, with hands-on model development experience
• Proficiency in PyTorch and/or TensorFlow, with experience training and deploying large-scale models
• Experience with transformer architectures, attention mechanisms, and large language model fine-tuning
• Hands-on experience with computer vision tasks including object detection, semantic segmentation, and visual tracking
• Strong programming skills in Python, with experience in distributed training and model optimization
• Understanding of sequential decision-making and control systems fundamentals
• Experience with MLOps practices including model versioning, monitoring, and deployment pipelines
• Proven ability to work independently on complex research problems and deliver practical solutions
• Strong communication skills and experience collaborating with cross-functional engineering teams

Preferred Qualifications
• PhD in Computer Science, Robotics, AI/ML, or a related field with a focus on multimodal learning or robotics
• Direct experience developing or working with Vision-Language-Action (VLA) models or similar multimodal architectures
• Experience with RLHF implementation and human feedback integration for model alignment
• Background in imitation learning, inverse reinforcement learning, or learning from demonstrations
• Ex…
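To make the PPO reference above concrete, here is a minimal sketch of PPO's clipped surrogate objective. The function name, tensor shapes, and the clipping epsilon are illustrative assumptions, not a required implementation.

```python
# Hypothetical sketch of the PPO clipped surrogate objective (Schulman et al., 2017).
# Names, shapes, and the default epsilon are illustrative assumptions.
import torch


def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped policy-gradient loss: discourages probability ratios far from 1."""
    ratio = torch.exp(new_log_probs - old_log_probs)  # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Maximize the surrogate objective, i.e. minimize its negative mean.
    return -torch.min(unclipped, clipped).mean()


if __name__ == "__main__":
    new_lp = torch.randn(8, requires_grad=True)       # log-probs under current policy
    old_lp = new_lp.detach() + 0.1 * torch.randn(8)   # log-probs under old policy
    adv = torch.randn(8)                              # advantage estimates
    loss = ppo_clipped_loss(new_lp, old_lp, adv)
    loss.backward()
    print(float(loss))
```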
Apply Now
