Kafka Database Engineer
Verifone
Why Verifone
For more than 30 years, Verifone has established a remarkable record of leadership in the electronic payment technology industry. Verifone is one of the leading electronic payment solutions brands and among the largest providers of electronic payment systems worldwide.
Verifone has a diverse, dynamic, and fast-paced work environment in which employees are focused on results and have opportunities to excel. We take pride in working with leading retailers, merchants, banks, and third-party partners to invent and deliver innovative payment solutions around the world. We strive for excellence in our products and services and are obsessed with customer happiness.
Across the globe, Verifone employees are leading the payments industry through experience, innovation, and an ambitious spirit. Whether it’s developing the next generation of secure payment systems or finding new ways to bring electronic payments to emerging markets, the Verifone team is dedicated to the success of our customers, partners, and investors. It is this passion for innovation that drives every Verifone employee toward personal and professional success.
What's Exciting About the Role
Verifone is seeking a Database Engineer to join our incredible Platform Engineering team. This is an early-career role with a focus on Kafka, where you’ll be hands-on with day-to-day operations, reliability, tuning, automation, and high availability for payment gateway solutions that process billions of transactions annually, both on premises and in the AWS Cloud. You will have the opportunity to leverage your experience and to learn other technologies such as Redis, MongoDB, PostgreSQL, MySQL, and Snowflake.
Key Responsibilities
- Operate and support Kafka clusters on Amazon MSK and on-prem physical/virtual platforms: topic lifecycle, partitions/replication, client connectivity, ACLs, quotas, and upgrades.
- Troubleshoot producer/consumer issues (lag, rebalancing, throughput, serialization, retries, ordering).
- Manage the Kafka Connect ecosystem, including Debezium, ksqlDB (KSQL), and Schema Registry.
- Design and maintain high-availability, replication, and backup and recovery strategies.
- Perform Kafka version upgrades in development and production environments with zero or minimal application downtime.
- Support schema/contract patterns (e.g., schema registry concepts, compatibility expectations, versioning).
- Work with engineering teams to understand their requirements, guide them toward best practices, and optimize queries for better performance.
- Monitor cluster health and performance; tune for throughput/latency and manage capacity.
- Help with incident response and postmortems: identify root cause, implement prevention.
- Support and improve centralized logging and search for operational troubleshooting and incident response using the ELK stack (Elasticsearch, Logstash, Kibana).
- Build automation for provisioning, configuration, and routine maintenance (IaC and scripting).
- Implement and improve monitoring/alerting and dashboards (SLIs/SLOs).
- Participate in on-call rotation (with escalation and mentoring) and support production systems.
- Document systems and operational procedures clearly so others can run what you build.
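To give a concrete flavor of the troubleshooting and automation work described above, here is a minimal, hypothetical sketch (the group, topic, and host names are invented) that totals per-topic consumer lag from the text output of the standard `kafka-consumer-groups.sh --describe` command:

```python
# Sketch: summarize per-topic consumer lag from the output of
#   kafka-consumer-groups.sh --bootstrap-server ... --describe --group <group>
# All group/topic/host names below are hypothetical examples.

def total_lag_by_topic(describe_output: str) -> dict[str, int]:
    """Parse `--describe` output and return the total lag per topic."""
    lag_by_topic: dict[str, int] = {}
    for line in describe_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 6:
            continue  # skip blank or malformed lines
        topic, lag = fields[1], fields[5]
        if lag.isdigit():  # LAG can be '-' when offsets are unknown
            lag_by_topic[topic] = lag_by_topic.get(topic, 0) + int(lag)
    return lag_by_topic

sample = """\
GROUP        TOPIC          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
payments-cg  auth-events    0          1500            1520            20   c1           /h1   client-1
payments-cg  auth-events    1          900             905             5    c2           /h2   client-2
payments-cg  settle-events  0          300             300             0    c3           /h3   client-3
"""

print(total_lag_by_topic(sample))
# {'auth-events': 25, 'settle-events': 0}
```

In practice a script like this would feed monitoring dashboards or alerting thresholds rather than print to stdout, but it illustrates the kind of lightweight automation this role builds.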
Required Qualifications/Skills
- 2+ years of hands-on experience supporting Kafka in a large-scale production environment.
- Understanding of Kafka producer/consumer microservice concepts and Kafka's distributed architecture.
- Solid Linux fundamentals: networking basics, logs, system troubleshooting, process/memory, disk.
- Comfort with scripting and automation (e.g., Python, Bash).
- Infrastructure-as-Code (Terraform preferred) and CI/CD familiarity.
- Familiarity with observability tools (metrics/logs/tracing concepts) and incident response practices.
- Basic understanding of distributed systems tradeoffs (availability, consistency, partitions, backpressure).
- Strong communication and presentation skills, with an emphasis on executive communication.
- Flexibility with regard to working shifts, including on-call and weekends.
Preferred Skills (Not Mandatory)
- Operate Redis deployments for caching, ephemeral state, queues/streams, and rate limiting use cases.
- Relational DB experience: PostgreSQL and/or MySQL (indexing basics, vacuum/analyze, query plans, replication fundamentals).
- MongoDB operational familiarity (replica sets, elections, oplog basics, backup/restore).
- Troubleshoot client behavior, keyspace growth, hot keys, and performance regressions.
- Improve reliability patterns: backups/snapshots (where applicable), failover readiness, and runbooks.
- Experience working with PCI DSS (Payment Card Industry Data Security Standard) requirements.
- Exposure to data analytics, data processing, ETL, and data lake concepts (batch vs. streaming, file formats such as Parquet, table formats such as Iceberg/Delta/Hudi, basic orchestration), and AWS tools (Athena, Glue, Redshift, etc.).
- On-prem experience (VMware/KVM, storage, networking) and/or AWS experience (EC2, VPC, IAM, MSK/ElastiCache, CloudWatch).
- Container/Kubernetes familiarity (deployments, stateful workloads, storage classes) is a plus.
- Security fundamentals: least privilege, secrets management, encryption-in-transit/at-rest concepts.
Our Commitment
Verifone is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. Verifone is also committed to compliance with all fair employment practices regarding citizenship and immigration status.