Technical Areas

USC will contribute expertise in three crucial technical areas:

  • The emerging area of AI and engineering safety.  Safety engineering is a mature discipline that has provided a strong methodology for safety-critical systems for many decades.  USC researchers in this area have provided safety-engineering expertise for aviation, nuclear technologies, and healthcare, among other areas.  In particular, the USC Aviation Safety & Security Program has trained thousands of military and commercial pilots, as well as personnel in industry and government, for over 70 years, and now includes courses on AI safety.  USC experts have been involved in safety standards as well as in safety evaluations for various federal agencies.  USC research includes methodologies for meeting safety standards such as ISO 26262, DO-178B/C, and the IEEE P2846 draft standard.  Additional work in this area includes the USC Airbus Institute for Engineering Research (AIER) and the USC Pratt & Whitney Institute (PWICE).  The AI Safety Institute’s efforts on risk and uncertainty analysis, safe operations, and safety culture should be informed by the methodologies and lessons learned from safety engineering.  
  • The unique challenges and methodological requirements for privacy and security in AI systems.  USC’s DETERLab has pioneered security and privacy experimentation for over two decades.  It has served a broad research community, including over 1,000 research project teams from over 250 institutions in 46 countries. Example projects include human behavior modeling in cybersecurity scenarios, defenses against extremely large-scale DDoS, worm, and botnet attacks, encrypted traffic classification, and phishing deception.  DETERLab has also been used in education by over 200 classes at 150 institutions, helping educate more than 20,000 students.  USAISI consortium efforts on testbeds for AI privacy and security could build on lessons learned from this testbed. With an $18M award from the NSF Mid-Scale Research Infrastructure (MSRI) program, we are now building SPHERE: Security and Privacy Heterogeneous Environment for Reproducible Experimentation, a research infrastructure for at-scale, realistic, and reproducible cybersecurity and privacy experimentation across very diverse hardware. Significant federated-learning research at USC has been applied to enhance privacy and security for distributed biomedical data sources and other data.
  • Thwarting the spread of misinformation.  False or misleading news, harmful content, and inaccurate reports can have profound impacts on our increasingly digital society. USC researchers are at the forefront of addressing these challenges, with renowned pioneers whose contributions include revealing political interference campaigns in the US and Europe, conspiracy theories, and public health misinformation during the COVID-19 pandemic. With over a decade of work at USC and funding totaling nearly $15M, USC researchers have focused on understanding and mitigating the spread and impact of misinformation. Identifying misinformation is a complex challenge, compounded by the nuances of context and intent. Standard fact-checking approaches struggle with the sheer volume and variety of information online. New generative AI techniques are already used to fabricate plausible media content, and AI agents, or bots, are used to replace or augment humans in distributing harmful misinformation. USC researchers have developed AI systems that identify patterns indicative of misinformation in large datasets; these AI-driven frameworks include automated fact-checking tools and content moderation algorithms that can analyze and flag potentially misleading content. USC researchers have also collected massive datasets (about COVID-19, elections, etc.) from social media platforms and shared them publicly with the research community, catalyzing hundreds of studies in the broader community. Looking ahead, the challenge of misinformation is expected to grow, and USC researchers are developing adaptive and robust AI solutions that are fair, protect user privacy, and balance information control with free speech.
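To make the federated-learning approach mentioned above concrete, the sketch below illustrates the generic federated averaging (FedAvg) idea: each data holder trains on its own private data and only model parameters, never raw records, are shared with a central server. This is a minimal, hypothetical illustration of the general technique, not USC's actual system; the "hospital" clients, the linear model, and all numbers are invented for the example.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's training: a few gradient steps on its private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

def fedavg(clients, rounds=20, dim=2):
    """Server loop: broadcast the global weights, collect each client's locally
    trained weights, and average them weighted by sample count.
    Raw data never leaves a client."""
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in clients]
        w = sum(len(y) / total * u for (_, y), u in zip(clients, updates))
    return w

# Hypothetical demo: three "hospitals" hold disjoint samples drawn from the
# same underlying relationship y = 3*x0 - 2*x1 (plus a little noise).
rng = np.random.default_rng(0)
true_w = np.array([3.0, -2.0])
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = fedavg(clients)
print(w)  # recovers weights close to [3, -2] without pooling any raw data
```

In a realistic deployment the averaging step would be combined with additional protections such as secure aggregation or differential privacy, since model updates alone can still leak information about the underlying data.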

USC brings extensive expertise in all areas of interest to the Consortium, including Data and data documentation, AI Metrology, AI Governance, AI Safety, Trustworthy AI, Responsible AI, AI system design and development, AI system deployment, AI Red Teaming, Human-AI Teaming and Interaction, Testing and Evaluation, Validation and Verification methodologies, Socio-technical methodologies, AI Fairness, AI Explainability and Interpretability, Workforce skills, Psychometrics, and Economic analysis, among others.