GeekyAnts India Pvt Ltd

Type: Services
Employees: 251–500
Rating: 4.5
Location: Bengaluru, Karnataka
About company

GeekyAnts is a design and development studio that specializes in building solutions for web and mobile that drive innovation and transform industries and lives. They hold expertise in state-of-the-art technologies like React, React Native, Flutter, Angular, Vue, NodeJS, Python, Svelte and more.

GeekyAnts has worked with more than 500 clients across the globe, delivering tailored solutions to a wide array of industries, including Healthcare, Finance, Education, Banking, Gaming, Manufacturing, and Real Estate. They are trusted tech partners of some of the world's top corporate giants and have helped small to mid-sized companies realize their vision and transform digitally. They have also been a registered service supplier for Google LLC since 2017.

Their services include Web & Mobile Development, UI/UX Design, Business Analysis, Product Management, DevOps, QA, API Development, and Delivery & Support.

In addition, GeekyAnts is the team behind NativeBase (15,000+ GitHub stars), one of React Native's best-known UI libraries, as well as BuilderX, Vue Native, Flutter Starter, apibeats, and numerous other open-source projects. GeekyAnts has offices in India (Bangalore) and the UK (London).

AI/ML (Mumbai)

Posted: a month ago
Salary: Not Disclosed
Experience: 6–10 Years
Location: Bengaluru, Karnataka

Job Description

We are looking for a senior, hands-on Backend Engineer to build and operate the core AI/ML-backed systems that power BharatIQ’s consumer-facing products at scale.

This is not a role to learn ML fundamentals on the job. You are expected to already have experience shipping and operating ML-backed backend systems in production, such as RAG pipelines, search, ranking, recommendations, or AI assistants. The work requires making pragmatic tradeoffs across quality, latency, reliability, and cost in real-world systems.

Key Responsibilities

  • Build and operate core backend services for AI product runtime: orchestration, state/session, policy enforcement, tools/services integration
  • Implement retrieval + memory primitives end-to-end: chunking, embeddings generation, indexing, vector search, re-ranking, caching, freshness and deletion semantics
  • Productionize ML workflows and interfaces: feature/metadata services, online/offline parity, model integration contracts, and evaluation instrumentation
  • Drive performance and cost optimization (P50/P95 latency, throughput, cache hit rates, token/call cost, infra efficiency) with strong SLO ownership
  • Add observability-by-default: tracing, structured logs, metrics, guardrail signals, failure taxonomy, and reliable fallback paths
  • Collaborate with applied ML on model routing, prompt/tool schemas, evaluation datasets, and release safety gates.
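The retrieval and memory responsibilities above (chunking, embedding, indexing, vector search, re-ranking) can be sketched end-to-end in miniature. This is an illustrative toy, not BharatIQ's stack: the hash-based `embed` stands in for a real embedding model, and the brute-force `search` stands in for an ANN index such as FAISS or Milvus.

```python
# Toy retrieval pipeline: chunk -> embed -> index -> search -> re-rank.
import hashlib
import math

def chunk(text: str, size: int = 40) -> list[str]:
    """Fixed-size character chunks (real systems use token/semantic chunking)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy deterministic embedding: bag-of-words hashed into a fixed-size vector."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def search(index, query_vec, k: int = 3):
    """Brute-force vector search; production systems use ANN indexes instead."""
    return sorted(index, key=lambda item: cosine(item[1], query_vec), reverse=True)[:k]

def rerank(query: str, hits):
    """Toy lexical re-ranker: reorder candidates by exact word overlap with the query."""
    qwords = set(query.lower().split())
    return sorted(hits, key=lambda item: len(qwords & set(item[0].lower().split())), reverse=True)

docs = ["vector search finds nearest neighbours",
        "circuit breakers stop cascading failures",
        "embeddings map text to dense vectors"]
index = [(c, embed(c)) for d in docs for c in chunk(d)]
hits = search(index, embed("nearest neighbour vector search"))
top = rerank("nearest neighbour vector search", hits)
```

The production version of each stage is swappable behind the same shape of interface, which is what makes freshness, deletion semantics, and caching tractable as separate concerns.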
Other Skills

  • 6–10 years building backend systems in production, including at least 2–3 years on ML/AI-backed products (search, recommendations, ranking, RAG, or assistants)
  • Practical ML chops: able to reason about embeddings, vector similarity, re-ranking, retrieval quality, evaluation metrics (precision/recall, nDCG, MRR), and data drift without needing to be taught the fundamentals
  • Experience implementing or operating RAG pipelines (document ingestion, chunking strategies, indexing, query understanding, hybrid retrieval, re-rankers)
  • Strong distributed systems fundamentals: API design, idempotency, concurrency, rate limiting, retries, circuit breakers, and multi-tenant reliability
  • Comfort with common ML/AI platform components: feature stores/metadata, streaming/batch pipelines, offline evaluation jobs, A/B measurement hooks
  • Ability to ship end-to-end independently: design → build → deploy → operate in a fast-moving environment
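The distributed-systems fundamentals listed above (retries, circuit breakers, failing fast) can be combined in a few lines. A minimal sketch, assuming a caller-supplied `fn` and illustrative defaults; real services would add jitter, per-endpoint breakers, and metrics:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: opens after `threshold` consecutive failures,
    then allows a half-open probe after `cooldown` seconds."""
    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        return time.monotonic() - self.opened_at >= self.cooldown  # half-open probe

    def record(self, ok: bool) -> None:
        if ok:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()

def call_with_retry(fn, breaker: CircuitBreaker, attempts: int = 3, base_delay: float = 0.01):
    """Retry with exponential backoff, short-circuiting while the breaker is open."""
    for attempt in range(attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
            breaker.record(True)
            return result
        except Exception:
            breaker.record(False)
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
```

Failing fast while the breaker is open is what keeps one degraded dependency from consuming the whole request budget, which matters for the P95 and multi-tenant reliability goals above.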
Nice to have

  • Agentic runtime / tool-calling patterns, function calling schemas, structured outputs, safety/guardrails in production
  • Prior work with FAISS / Milvus / Pinecone / Elasticsearch hybrid retrieval, and model serving stacks
  • Kubernetes + observability stack depth (OpenTelemetry, Prometheus/Grafana, distributed tracing), plus privacy controls for user data
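Hybrid retrieval, mentioned above, is often implemented by merging a lexical ranking (e.g. BM25 from Elasticsearch) with a vector ranking. One common merge strategy, sketched here with hypothetical document IDs, is Reciprocal Rank Fusion:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over rankings of 1 / (k + rank(d)).
    A simple, score-free way to merge lexical and vector result lists."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["doc_a", "doc_b", "doc_c"]   # hypothetical BM25 ordering
vector  = ["doc_b", "doc_d", "doc_a"]   # hypothetical embedding ordering
fused = rrf_fuse([lexical, vector])
```

RRF needs no score calibration between the two retrievers, which is why it is a popular default before a learned re-ranker is introduced.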
Educational Qualifications

  • Bachelor’s or Master’s in Computer Science, Data Science, or related fields.
  • Equivalent practical experience building and operating large-scale backend and ML-backed systems will be considered in place of formal education.
  • Advanced certifications or research exposure in AI/ML/DL is an added advantage.
Rounds description



    2020 © All rights reserved. GeekyAnts India Pvt Ltd