OPEN POSITION

Senior AI Platform Engineer

We are expanding our team and looking for skilled, goal-oriented backend developers to join our platform and custom solutions teams.

Apply now

Our stack

Python 3.12+ and NodeJS as our core languages – Python for rapid development, NodeJS for telephony integrations and high-performance components

Cloud-native infrastructure: Kubernetes orchestration with Helm, multi-cloud deployment across AWS, GCP, and Azure

gRPC, REST APIs, WebSockets, and Kafka for service communication (see the sketch after this list)

GitHub Actions for CI/CD, Docker for containerization, with Grafana, Prometheus, and OpenTelemetry for observability

PostgreSQL, ClickHouse for analytics, and specialized vector databases for AI workloads.
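
To give a concrete picture of the service-communication layer, here is a minimal sketch of a Kafka consumer in Python using the confluent_kafka client. The broker address, consumer group, and topic name are hypothetical placeholders, not our actual configuration.

    from confluent_kafka import Consumer

    # Hypothetical broker, group, and topic names (illustration only).
    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",
        "group.id": "dialogue-events",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["voice-interactions"])

    try:
        while True:
            msg = consumer.poll(timeout=1.0)  # wait up to 1 s for the next record
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            # A real service would hand the record to an async handler or worker pool.
            print(msg.topic(), msg.value())
    finally:
        consumer.close()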

Responsibilities

Architect high-performance real-time systems: Design microservices and streaming architectures that handle millions of concurrent voice/text AI interactions with ultra-low latency

Create foundational platform components: Build reusable infrastructure blocks (rate limiters, service mesh, circuit breakers), developer tools (APIs, SDKs), and observability solutions that enable teams to ship AI features rapidly (see the rate-limiter sketch after this list)

Optimize AI infrastructure end-to-end: Manage GPU clusters, design model serving pipelines, implement vector similarity search (see the similarity-search sketch after this list), and collaborate with ML engineers to ensure efficient model deployment and inference

Enable engineering excellence: Establish platform standards, deployment patterns, and self-service automation that empower product teams to move fast while maintaining reliability.
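
To illustrate the kind of reusable building block we mean, below is a minimal token-bucket rate limiter sketch in Python. It is illustrative only: the class and parameter names are invented for the example, and a production version would sit behind the platform API and share state across service instances.

    import threading
    import time

    class TokenBucket:
        """Minimal thread-safe token-bucket rate limiter (illustrative sketch)."""

        def __init__(self, rate: float, capacity: float) -> None:
            self.rate = rate              # tokens added per second
            self.capacity = capacity      # maximum burst size
            self.tokens = capacity
            self.updated = time.monotonic()
            self.lock = threading.Lock()

        def allow(self, cost: float = 1.0) -> bool:
            """Return True if the call may proceed, False if it should be throttled."""
            with self.lock:
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
                self.updated = now
                if self.tokens >= cost:
                    self.tokens -= cost
                    return True
                return False

    # Usage: roughly 100 requests per second with bursts of up to 20.
    limiter = TokenBucket(rate=100, capacity=20)
    if not limiter.allow():
        ...  # reject the request, e.g. with HTTP 429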
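
And as a toy illustration of vector similarity search, the brute-force cosine lookup below shows the core operation; in practice this is delegated to a dedicated vector database, and the embedding sizes and data here are made up.

    import numpy as np

    def top_k_cosine(query: np.ndarray, vectors: np.ndarray, k: int = 5) -> np.ndarray:
        """Return indices of the k rows of `vectors` most similar to `query`."""
        q = query / np.linalg.norm(query)
        v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
        scores = v @ q                       # cosine similarity against every row
        return np.argsort(scores)[::-1][:k]  # highest-scoring indices first

    # Toy data: 1,000 embeddings of dimension 384.
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(1000, 384))
    query = rng.normal(size=384)
    print(top_k_cosine(query, embeddings, k=3))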

Requirements

Pragmatic engineering approach: Balance speed and quality by shipping working solutions quickly when needed, and favor clear, maintainable code over clever abstractions

Strong programming skills in Python (expert level with type systems and modern ecosystem) and NodeJS (for performance-critical components)

Production-scale distributed systems experience: Building and operating microservices architectures with service discovery, API gateways, and event-driven patterns, plus a proven ability to handle large-scale production workloads

Real-time systems and observability expertise: Experience with streaming architectures, message queues, and low-latency optimization, plus deep knowledge of distributed tracing, metrics aggregation, and log analysis at scale (see the tracing sketch after this list)

Strategic thinking: For critical architectural decisions, you invest time to evaluate multiple approaches, present the leading options with their trade-offs, and document why alternatives were rejected

Outstanding technical communication: Ability to clearly explain complex architectural decisions, write comprehensive documentation, facilitate productive technical discussions, and effectively communicate with both engineering teams and stakeholders.
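
For the observability side, the sketch below shows the sort of minimal OpenTelemetry tracing instrumentation we mean. It exports spans to the console purely for illustration (a real deployment would use an OTLP exporter), and the tracer, function, and attribute names are hypothetical.

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Console exporter for illustration; production would export OTLP to a collector.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("voice-gateway")  # hypothetical service name

    def handle_interaction(session_id: str) -> None:
        # Each request becomes a span; attributes make it searchable in the tracing backend.
        with tracer.start_as_current_span("handle_interaction") as span:
            span.set_attribute("session.id", session_id)
            ...  # downstream calls pick up this trace context

    handle_interaction("demo-session")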

What we offer

High-impact projects with prominent clients and development of our proprietary AI platform

Cutting-edge technology stack with the latest tools in AI infrastructure and platform engineering

Budgets for the latest AI models (GPT-4, Claude, etc.) to ensure our team has the best tools available

Rapid career progression working alongside senior professionals from leading tech companies

Remote work flexibility from Europe / US

Direct influence on platform architecture decisions that shape our AI products.

