We speak…

GDPR

&

EU AI Act 


ABOUT

In the race for generative AI market domination and ChatGPT API monetization, artificial intelligence/machine learning (AI/ML) systems are being deployed every second of every day.

Many businesses and organizations don’t have the luxury of time or dedicated resources to navigate AI risks and keep up with rapidly changing regulation.

AI/ML technology innovates at light speed. The snail's pace, legacy bloat, and rigid annual contracts of traditional advisory firms no longer make sense when transformation is fast-tracked.

That's why we offer "just in time" pre- and post-AI-audit advisory & governance services. What you need today will inevitably change tomorrow.


SERVICES

  • GDPR

    EU AI Act

    NYC Bias Law

    CCPA, VCDP, CDPA, UCPA

    Children’s Code

  • Accountability

    Explainability

    Fairness

    Inclusiveness

    Privacy

    Reliability & Robustness

    Safety & Security

    Transparency

  • Proactive Not Reactive

    Preventative Not Remedial

    Privacy as the Default Setting

    Privacy Embedded Into Design

    Visibility and Transparency

    End-to-End Security

    Respect for User Privacy

  • Designing user-first controls (data, feature, and global) that remind people of their own agency when interacting with automated systems. Building trust and transparency in ML/AI-generated outputs.
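The three control tiers above can be sketched in code. This is a minimal, illustrative model, not a real product API: the control names, labels, and helper functions are assumptions invented for the example. It also demonstrates the "privacy as the default setting" principle from the list above, since every control starts opted out.

```typescript
// Illustrative sketch of user-first AI controls at three tiers:
// data (what the system may use), feature (per-feature opt-ins),
// and global (a master switch for automated processing).
// All identifiers below are hypothetical examples.

type ControlScope = "data" | "feature" | "global";

interface UserControl {
  scope: ControlScope;
  id: string;       // e.g. "ai-autocomplete"
  label: string;    // user-facing text shown at the point of use
  enabled: boolean; // the user's current choice
}

// Privacy as the default setting: every control begins disabled
// until the user explicitly opts in.
function defaultControls(): UserControl[] {
  return [
    { scope: "global",  id: "automated-processing", label: "Allow AI-assisted features",            enabled: false },
    { scope: "data",    id: "share-usage-data",     label: "Use my activity to improve the model",  enabled: false },
    { scope: "feature", id: "ai-autocomplete",      label: "AI writing suggestions",                enabled: false },
  ];
}

// A feature is active only when the user enabled it AND the global
// switch is on, keeping human agency in control at every level.
function isActive(controls: UserControl[], featureId: string): boolean {
  const globalOn = controls.some(c => c.scope === "global" && c.enabled);
  const feature = controls.find(c => c.id === featureId);
  return globalOn && feature !== undefined && feature.enabled;
}
```

The design choice here is deliberate layering: disabling the single global control overrides every data and feature setting, so a user never has to hunt through individual toggles to withdraw from automated processing.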

EVENTS

  • WMF - International Tech & Digital Innovation Festival

    Keynote Speaker, “Can Generative AI like ChatGPT Be Governed?”

    AI for Government Stage

    Italy, June 2023

PERSPECTIVES

Honest Lying: Why Scaling Generative AI Responsibly Is Not So Much a Technology Dilemma as a People Problem

In the 24/7 news cycles of generative AI hype and fears of widespread job loss, little attention is being paid to the herculean hiring effort and human expertise necessary to scale generative AI responsibly.

From Black Box to Glass Box: Is AI Transparency Still Possible?

With the rise of OSS security concerns, diverging explainability goals, and custom, proprietary XAI algorithms… is AI transparency still possible?