Chargebee is a subscription management platform used by fast-growing businesses like FreshWorks, Okta, and Envoy to manage and automate their recurring billing and lifecycle operations.
What can you expect from the role?
Chargebee is looking to leverage the full capability of its data assets to create value for its customers in its next phase of hyper-growth. We are looking for a hands-on technical leader to architect, design, and develop solutions that collect, manage, store, and visualize data from a wide range of sources in a scalable, performant, and highly available manner. As an Architect, you will be responsible for all products shipped by the engineering team, including the selection of technologies, architecture, development, QA, DevOps, and their performance. You will also play a key role in defining, designing, and building predictive analysis using AI/ML modules to leverage these datasets effectively.
What do we expect?
Overall 10-15+ years of experience designing, architecting, and implementing large-scale data processing/data storage/data distribution systems
Experience creating and supporting production software/systems, with a proven track record of identifying and resolving performance bottlenecks in production
Ability to work with a multi-technology, cross-functional team to guide and manage the full life cycle of a Big Data solution
Strong understanding of Agile and DevOps practices across SDLC
Track record of building and scaling a SaaS product
Excellent problem-solving and programming skills, with proven technical leadership and communication abilities
Ability to learn new tools and paradigms in data engineering and data science; intellectually curious and continuously striving to learn
Experience creating large-scale data engineering models and pipelines, data-based decision-making, and quantitative analysis
Hands-on experience in ETL/ELT data pipeline design, source-target mapping, development, and performance tuning in the Big Data ecosystem
Experience architecting and designing data products on streaming, serverless, and microservices/API-based loosely coupled architectures and platforms
Experience with industry-standard ETL/workflow tools such as MuleSoft, Informatica, Talend, or Airflow, and BI visualization tools (Tableau, Looker, or others)
Experience in Machine Learning and Deep Learning toolkits such as Spark ML/MLlib, TensorFlow, PyTorch, SciPy, NumPy, pandas, or equivalent
Advanced experience with Big Data technologies: Kafka, Apache Spark, Hadoop
Experience with data warehousing tools like DynamoDB, SQL, Amazon Redshift, and Snowflake
Exposure to building global-scale, cloud-native systems on a modern tech stack: AWS, Java, Spring Framework, RESTful APIs, and container-based applications
Experience building predictive models through all phases of development, from design through training, evaluation, validation, and implementation
Apply with your résumé to get the conversation started.