Straive is a market-leading Content and Data Technology company providing data services, subject-matter expertise, and technology solutions across multiple domains.
Data Analytics & AI Solutions, Data & AI-Powered Operations, and Education & Learning form the core pillars of the company’s long-term vision. The company is a specialized solutions provider to business information providers in finance, insurance, legal, real estate, life sciences, and logistics. Straive continues to be the leading content services provider to research and education publishers.
Data Analytics & AI Services: Our Data Solutions business has become critical to our clients' success. We use technology and AI with human experts in the loop to create data assets that our clients use to power their data products and their end customers' workflows. As our clients expect us to become their future-fit Analytics and AI partner, they look to us for help in building enterprise data analytics and AI capabilities.
With a client base spanning 30 countries worldwide, Straive’s multi-geographical resource pool is strategically located in India, the Philippines, the USA, Nicaragua, Vietnam, and the United Kingdom, with the company headquartered in Singapore.
Website: https://www.straive.com/
Job Title: Data Scientist with Databricks
Type: C2C
Location: Santa Clara, CA (5 days onsite)
Job Description:
- B.E. / B.Tech / M.Tech / MCA in Computer Science, Artificial Intelligence, or a related field.
- 6+ years of IT experience, with at least 3 years in Data Science (AI/ML).
- Strong programming skills in Python.
- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
- Hands-on AI/ML modeling experience with complex datasets, combined with a strong understanding of the theoretical foundations of AI/ML (research-oriented).
- Expertise in most of the following areas: supervised & unsupervised learning, deep learning, reinforcement learning, federated learning, time series forecasting, Bayesian statistics, and optimization.
- Hands-on experience designing and optimizing LLMs and natural language processing (NLP) systems, frameworks, and tools.
- Experience independently building retrieval-augmented generation (RAG) applications using available open-source LLMs.
- Comfortable working in cloud and high-performance computing environments (e.g., AWS, Azure, GCP, Databricks).