What are the responsibilities and job description for the Data Architect position (onsite in Phoenix, AZ; W2; must be local to AZ) at Value Spectrum Technologies LLC?
Role : Enterprise Data Architect
Location : Phoenix, AZ
Experience : 12 years
Role Summary
The Enterprise Data Architect is a hands-on technologist with enterprise-wide visibility and accountability for data architecture across the organization. This role goes beyond strategy and governance, requiring the ability to design, build, prototype, and operationalize architectures using code.
The architect is expected to demonstrate solutions, codify architectural standards as reusable assets, and work directly with engineering teams to ensure architectures are implemented, observable, secure, and scalable in practice, not just in theory.
Key Responsibilities
1. Enterprise Data Strategy & Architecture (with Execution Ownership)
Define and continuously evolve the enterprise data architecture blueprint, and, crucially, express it as Architecture-as-Code (reference implementations, templates, IaC, CI/CD patterns).
Own enterprise-wide visibility into data platforms, data products, pipelines, and usage patterns across domains.
Translate enterprise data strategy into working, deployable solutions, not static documentation.
Establish data product standards including domain ownership, contracts, schemas, and SLAs, validated through real implementations.
Lead data platform modernization by personally contributing to design reviews, PoCs, and production-grade builds.
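To make "contracts, schemas, and SLAs validated through real implementations" concrete, here is a minimal, hypothetical sketch of a data product contract expressed as code rather than static documentation. The `DataContract` class, the `orders` product, and its fields are illustrative assumptions, not part of the role description:

```python
from dataclasses import dataclass

# Hypothetical sketch: a data product contract as code, so schema and
# SLA expectations can be checked in CI/CD instead of a design document.

@dataclass
class DataContract:
    product: str                 # data product name (domain-owned)
    schema: dict                 # column name -> expected type name
    freshness_sla_minutes: int   # maximum allowed staleness

def validate_record(contract: DataContract, record: dict) -> list:
    """Return a list of violations; an empty list means the record conforms."""
    violations = []
    for column, type_name in contract.schema.items():
        if column not in record:
            violations.append(f"missing column: {column}")
        elif type(record[column]).__name__ != type_name:
            violations.append(
                f"bad type for {column}: expected {type_name}, "
                f"got {type(record[column]).__name__}"
            )
    return violations

orders_contract = DataContract(
    product="orders",
    schema={"order_id": "str", "amount": "float"},
    freshness_sla_minutes=60,
)

print(validate_record(orders_contract, {"order_id": "A-1", "amount": 19.99}))  # []
print(validate_record(orders_contract, {"order_id": 42}))
```

In practice a check like this would run as a pipeline gate, failing the build when a producer breaks its published contract.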
2. Hands-on Data Platforms & Engineering
Architect and implement scalable data platforms using AWS, EMR, Kafka, Snowflake, Databricks, Iceberg, and lakehouse patterns.
Personally build and review reference pipelines for batch, streaming, real-time, and event-driven use cases.
Define and implement data modeling, metadata, lineage, and data quality frameworks using code-first approaches.
Create reusable enterprise accelerators (templates, libraries, patterns) that teams can adopt.
3. AI, ML & GenAI Enablement (Practical, Not Conceptual)
Partner hands-on with AI and ML teams to enable feature stores, training pipelines, vector databases, and GenAI workflows.
Define and implement reference architectures for LLM integration, prompt orchestration, and retrieval-augmented generation (RAG).
Ensure AI-readiness through automated lineage, observability, and governance controls, embedded into pipelines.
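As an illustration of the RAG reference architecture mentioned above, the following is a toy sketch only: a bag-of-words "embedding" and cosine similarity stand in for a real vector database, and the assembled prompt is what would be sent to an LLM. All names here (`embed`, `retrieve`, `build_prompt`, the sample corpus) are assumptions for illustration:

```python
from collections import Counter
import math

# Toy retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant passages for a query, then stitch them into an LLM prompt.

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Iceberg tables support schema evolution and time travel.",
    "Kafka topics carry ordered streams of events.",
    "Snowflake separates storage from compute.",
]
print(build_prompt("How does Iceberg handle schema evolution?", corpus))
```

A production version would swap the toy pieces for a real embedding model and vector store, and add the lineage and observability controls called for above.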
4. Governance, Security & Compliance Embedded in Code
Operationalize governance by embedding controls into pipelines, platforms, and CI/CD rather than relying on manual reviews.
Implement security, access controls, encryption, and privacy-by-design directly in infrastructure and data workflows.
Ensure regulatory compliance is provable through automation, telemetry, and audit artifacts.
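A minimal sketch of what "governance embedded in code" can look like, assuming a hypothetical PII-masking policy (`PII_COLUMNS`, `enforce_policy`, and the policy name are invented for illustration): sensitive columns are masked inside the pipeline step, and an audit artifact is emitted so compliance is provable from telemetry rather than manual review:

```python
import hashlib
import json

# Hypothetical governance-in-code step: mask PII before records are
# published, and emit an audit artifact for compliance evidence.

PII_COLUMNS = {"email", "ssn"}  # assumed policy; normally sourced from a catalog

def mask(value: str) -> str:
    # One-way hash so the raw value never leaves the pipeline in clear text.
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def enforce_policy(record: dict) -> tuple:
    """Return (masked record, audit artifact) for one record."""
    masked = {
        k: (mask(str(v)) if k in PII_COLUMNS else v)
        for k, v in record.items()
    }
    audit = {
        "masked_columns": sorted(PII_COLUMNS & record.keys()),
        "policy": "pii-mask-v1",
    }
    return masked, audit

rec, audit = enforce_policy({"order_id": "A-1", "email": "jane@example.com"})
print(json.dumps(audit))
print(rec["email"] != "jane@example.com")  # True: email is masked
```

Running this as a mandatory stage in every pipeline, with the audit artifacts collected centrally, is one way to make regulatory compliance provable through automation.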
5. Leadership Through Doing
Act as a hands-on mentor who codes alongside teams when needed to unblock delivery.
Serve as an enterprise-wide advisor, with the credibility earned through demonstrated implementations.
Continuously evaluate and test emerging technologies before recommending enterprise adoption.
Required Skills & Experience
10+ years in data architecture and data engineering roles, with hands-on delivery experience.
Proven ability to code, build, and productionize enterprise data solutions.
Deep expertise in AWS, Snowflake, Spark, Kafka, EMR, Iceberg, and lakehouse architectures.
Strong experience across:
RDBMS, NoSQL, graph (Neo4j), vector databases, search platforms.
Streaming platforms (Kafka, Kinesis).
Observability tools (Prometheus, Grafana, Datadog, Splunk, CloudWatch).
Experience with data mesh and data products, implemented in practice.
Strong executive communication skills grounded in real system ownership.
Salary : $60 - $65 per hour