
Research Insights

From academic research to production-grade AI safety.

HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding

July 29, 2024

Differentially Private Synthetic Data via Foundation Model APIs 2: Text

July 29, 2024

HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal

July 29, 2024

Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing

July 29, 2024

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

July 29, 2024

Rob-FCP: Certifiably Byzantine-Robust Federated Conformal Prediction

July 29, 2024

RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content

July 29, 2024

C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models

July 29, 2024

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!

July 29, 2024
