Research Scientist, Evaluations, Security and Privacy, DeepMind

Google

Mountain View, CA

JOB DETAILS
JOB TYPE
Full-time, Employee
LOCATION
Mountain View, CA
POSTED
1 day ago
Applicants in San Francisco: Qualified applications with arrest or conviction records will be considered for employment in accordance with the San Francisco Fair Chance Ordinance for Employers and the California Fair Chance Act.

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Mountain View, CA, USA; San Francisco, CA, USA.

Minimum qualifications:

  • PhD degree in Computer Science, a related field, or equivalent practical experience.
  • 4 years of experience with research agendas across multiple teams or projects.
  • 3 years of experience designing and implementing benchmarking frameworks for machine learning models.
  • 2 years of experience in security and privacy.
  • One or more scientific publications or submissions to conferences, journals, or public repositories (e.g., CVPR, ICCV, NeurIPS, ICML, ICLR).

Preferred qualifications:

  • 3 years of experience in software development or engineering.
  • 2 years of experience coding in C++ and Python.
  • Passion for AI technology and all of its possibilities.

About the job

As an organization, Google maintains a portfolio of research projects driven by fundamental research, new product innovation, product contribution, and infrastructure goals, while providing individuals and teams the freedom to emphasize specific types of work. As a Research Scientist, you'll set up large-scale tests and deploy promising ideas quickly and broadly, managing deadlines and deliverables while applying the latest theories to develop new and improved products, processes, or technologies. From creating experiments and prototyping implementations to designing new architectures, our research scientists work on real-world problems that span the breadth of computer science, such as machine (and deep) learning, data mining, natural language processing, hardware and software performance analysis, improving compilers for mobile platforms, as well as core search and much more.

As a Research Scientist, you'll also actively contribute to the wider research community by sharing and publishing your findings, with ideas inspired by internal projects as well as from collaborations with research programs at partner universities and technical institutes all over the world.

The mission of the team is to develop solutions that address contextual security and privacy challenges in Gemini and in Google's agentic products. This research team anticipates upcoming security challenges and solves them, carrying solutions all the way through to landing in Gemini and other Google products.

Artificial intelligence will be one of humanity’s most transformative inventions. At DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high-quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.

We are pushing the boundaries across multiple domains. Our global teams offer learning opportunities and varied career pathways for those driven to achieve exceptional results through collective effort.
The US base salary range for this full-time position is $207,000-$300,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Drive research to safeguard Gemini’s flagship foundation models and agentic products against emerging vulnerabilities at a massive scale.
  • Design, prototype, and evaluate novel defense mechanisms to protect models and agents from adversarial attacks, prompt injections, and contextual security threats.
  • Translate theoretical research breakthroughs into practical, real-world security solutions for both training and inference pipelines.
  • Work closely with core modeling, engineering, and Trust and Safety teams to seamlessly integrate security innovations into Gemini's infrastructure.
  • Stay ahead of the threat landscape by inventing next-generation security techniques specifically designed for autonomous and agentic AI systems.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

About the Company

Google