Role: Big Data Developer-Automation
Location: Austin, TX (Day one onsite)
Duration: 6+ Months
Job Description
Experience: 9+ years
Responsibilities
Please share candidates with a background in EDA workflows, ideally with experience in the semiconductor industry or at EDA software companies. We are looking for strong Python automation skills along with a good understanding of DevOps practices.
- Design, develop and implement software solutions and automation workflows for IT infrastructure and operations
- Based upon requirements, independently design and develop best-practice code that enables integration and administration of diverse applications, with a focus on process automation
- Deploy production-ready code using industry-standard testing and deployment processes, such as unit/acceptance tests, testing environments and CI/CD processes
- Leverage web frameworks (such as Flask or Django) to provide full-featured RESTful APIs that can be used by end-users and other applications
- Document and support automation services, and continue to build on the automation framework: make it responsive and self-healing, incorporate AI, and make it fully data-driven
- Improve and refine server OS deployment and provisioning processes using automation
- Design, implement, and support IT applications such as CMDB, the ELK stack, and GraphQL, as well as tools such as Grafana, InfluxDB, MariaDB, and PostgreSQL
- Contribute to configuration management policy development to further automate and streamline operations
- Contribute to open source tool builds used on Linux servers
- Hands-on application debugging, troubleshooting, and problem remediation with automation workflows
- Build tools and implement automated flow to integrate seamlessly into our job scheduling system, including regression and continuous integration systems
- Work with other cross-functional teams to automate workflows, implement dashboards and monitoring systems and support engineering groups
- Develop containerized applications using tools such as OpenShift/Kubernetes/Podman/Docker and champion adoption of microservices and containerization best practices for automating IT processes
- Closely collaborate with IT and internal customer teams to understand requirements and develop new tools/applications
- Continuously evaluate and improve best practices for IT process automation
- Contribute to and remain fully focused on executing project plans defined by the SARC IT team
Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 7+ years of experience in IT automation engineering
- Significant experience designing and developing enterprise-scale software solutions, working with customer and stakeholder requirements
- Strong proficiency in one or more programming languages such as Python, Ruby, or Java
- Experience with configuration management tools such as Ansible, Puppet, or CFEngine
- Significant experience with development best practices: source control, pull requests, code reviews
- Experience producing production-ready code using testing and deployment best practices
- Experience working with and developing APIs (Flask, Django, Sanic, or similar)
- Experience with CI/CD tools and software testing frameworks
- Knowledge of building, configuring, monitoring, and supporting open source tools and application stacks on Linux
- Commitment to continually improving services and automated processes to meet the needs of customers and colleagues
- Experience with common Linux operating system commands and utilities
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills
- Self-starter with the ability to work independently and as part of a team
Nice To Have Skills
- Understanding of infrastructure components (storage, compute, network, licensing, version control systems) and basic system administration skills
- Knowledge of server provisioning with tools such as Red Hat Satellite Server, Foreman, PXE, Kickstart
- Knowledge of building, configuring, and administering Linux computer systems in an environment with hundreds or thousands of clients
- Knowledge of Jira, Jira Project Management, Confluence, BitBucket
- Familiarity with Virtualization environments and tools such as VMWare, vCenter, vCenter Orchestrator
- Experience with the ELK stack (Elasticsearch, Logstash, Kibana) or Splunk
- Knowledge of internet protocols and services including TCP, UDP, DNS, DHCP, HTTP, SSH, LDAP & AD
- Cloud experience (AWS/Google Cloud/Azure)
- Experience with containers / Kubernetes / OpenShift
- Knowledge of EDA workflows