Data Platform Lead

Azimuth GRC

Administration

Jacksonville, FL, USA

Posted on Apr 14, 2026

About us:

At Azimuth, we’re building technology that eliminates blind spots, scales effortlessly, and sets new standards across industries. From our hub in Jacksonville, FL, our platform transforms millions of data points into actionable insights every day, automating what others still do manually and proving that speed and accuracy can coexist.

But Azimuth is more than the platform we’ve built; it’s the people behind it. We’re a high-growth team of innovators, engineers, strategists, and problem-solvers united by a drive to challenge outdated systems and replace them with something better. We thrive on bold thinking, collaboration, and the satisfaction of seeing vision turn into impact at scale. Here, you’ll do more than just contribute; you’ll shape the future. Every idea has a seat at the table, every contribution moves industries forward, and every person has the chance to do the best work of their career. If you’re looking for a place where innovation is expected, speed is celebrated, and your work truly matters, Azimuth is where it all happens.

Responsibilities

Data Platform Architecture & Engineering

  • Own the end-to-end data platform architecture on Azure Databricks, including data ingestion, transformation, modeling, and serving layers, following the Medallion architecture (Bronze, Silver, Gold).
  • Design, build, and maintain scalable, reliable Delta Lake pipelines using PySpark and Python.
  • Manage schema evolution, backward compatibility, and long-term data model stability as product requirements evolve.
  • Own the enterprise data model supporting the company’s proprietary compliance and regulatory test execution platform.
  • Define and maintain data contracts between the data platform and downstream consumers, including BI/reporting, APIs, and Gen AI applications.

Data Governance, Quality, and Compliance

  • Implement and manage data governance using Unity Catalog, including access controls, column-level security, data lineage, and auditability.
  • Define and enforce data quality standards, including validation rules, reconciliation checks, exception monitoring, and audit-ready data processes.
  • Ensure data lineage, traceability, and immutable logging practices align with regulatory and audit requirements.
  • Partner with compliance and product teams to ensure data structures support regulatory reporting and compliance workflows.

AI / Gen AI Data Enablement

  • Prepare clean, structured, and vectorized datasets for LLM and Gen AI use cases, including embedding pipelines, retrieval-augmented generation (RAG) data feeds, and feature stores.
  • Partner with the AI Architect to define data requirements for AI and machine learning features.
  • Ensure curated datasets can support both BI/reporting and AI/ML use cases from a shared governed data layer.

Team Leadership & Delivery

  • Mentor data engineers, perform architecture and code reviews, and establish engineering standards and best practices.
  • Establish a strong pull-request, testing, and documentation culture across the data engineering team.
  • Assign ownership across workstreams, track delivery milestones, and proactively unblock dependencies.
  • Own incident response for data pipeline failures, including triage, root-cause analysis, and postmortem documentation.
  • Collaborate closely with BI/Reporting teams to deliver Gold-layer datasets optimized for analytics and reporting consumption.

Qualifications / Requirements

  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related quantitative field.
  • 8+ years of experience in data engineering, with at least 3 years in a lead role or senior individual contributor capacity.
  • Expert-level experience with Azure Databricks, including Delta Lake, Structured Streaming, Delta Live Tables, Databricks Workflows, and Unity Catalog.
  • Strong programming skills in Python and PySpark with experience writing production-grade, well-tested, and peer-reviewed data pipelines.
  • Advanced SQL skills, including window functions, CTEs, query optimization, execution plan analysis, and performance tuning at scale.
  • Strong data modeling experience including star/snowflake schemas, slowly changing dimensions (SCD), and fact/dimension modeling for reporting and analytics.
  • Strong fundamentals in data structures, algorithms, object-oriented programming, automated testing, and performance optimization.
  • Hands-on experience with Azure data ecosystem services such as ADLS Gen2, Azure Data Factory, Azure Key Vault, Azure SQL, and Event Hubs.
  • Experience managing data platform resources with infrastructure-as-code tools such as Terraform or Databricks Asset Bundles.
  • Working knowledge of Gen AI and LLM data architectures including vector databases, embedding pipelines, and retrieval-augmented generation (RAG).
  • Experience delivering curated datasets that serve both BI/reporting tools and machine learning/AI consumers from a shared data platform.
  • Familiarity with financial regulatory data frameworks such as HMDA, CECL, or CRA is strongly preferred.