Generative AI Security Framework | Slides

ATTACK SURACE

ATTACK SURACE

WHAT TO SECURE

WHAT TO SECURE

ACTION PLAN

ACTION PLAN


Generative AI Security Framework




Verify LLM and AI Assistant Answers

If you are using AI Assistant, you should cross check facts/number given by AI Assistant

Check in Vecor DB: If you are using Vector DB/RAG, you can check what value RAG provide. This will help to ensure that response generated by RAG is in line with value stored in vector DB.
Use Second LLM: Other/aditional approach is you can ask a smaller question from second or same LLM and se what answer you get e.g. if there is 1 page of text and it says company Dataknobs has revenue of $78M, you can ask a smaller question "how much revneue Dataknobs has". However you need to consider additional cost of 2nd call? You may have more than one fact and multiple calls may be needed for each fact.
Call to Search Engine: You can run a query on search engine programmatically and chec response. However depending on domain this result may or may not work. It may require parsing result from search engine.

Governence Security and Compliance

Gen AI Attack Surface

How Dataknobs identify GenAI attack surface

  • Data Poison : Malacious actor can temper with training data to corrupt AI and Gen AI models.
  • Prompt Injection : Attackers craft special instruction disguisd as regular prompts.
  • Data Source Attack: Attacker can attack and hack data sources.
  • Attack on Model: Attacker can analyze input , output pair to train a surrogate model.
  • What to Secure - GenAI

    How Dataknobs protects

  • Secure e2e infra
  • Secure the prompts, validate input.
  • Secure APIs to ensure attacker does not get to your data
  • Apply Modration in prompts
  • Secure your model - regularly check for data poison, model behavior change
  • Action Plan for Gen AI Security

    Build Action Plan with us for GenAI safety

  • Establish AI governance
  • Validate Infra Security
  • Validate Data Security
  • Validate Model Security
  • Check Prompt, Input, LLM Usage Security
  • Develop data products with KREATE, KONTROLS and KNOBS

    Innovate with responsibility

  • KREATE - Create What matters - data,content, User interface, AI Assistant & Application
  • .
  • KONTROLS - Focus on guardrails, safety, security, governance, lineage. Add right controls in creation process
  • KNOBS - Enable Diagnostic and Experimentation in creation and appying controls.
  • Why Kontrols matters

    Control GenAI and AI output

  • GenAI create new trajectories of data. It may produce unwanted output.
  • Apply controls to check facts and avoid producing incorrect answers.
  • Apply controls to produce output that is natural.
  • Produce responses as per law and governace policies.
  • Why Knobs matter

    Knobs are levers using which you manage output

    See Drivetrain appproach for building data product, AI product. It has 4 steps and levers are key to success. Knobs are abstract mechanism on input that you can control.

    PRODUCTS

    KREATE

  • Generate Datasets and Text Content, Images, Slides
  • Generate Websites and User interface
  • Set up AI Assistants
  • KONTROLS

  • Data Lineage : Prompt to content geenration to version to usage
  • Input Filtering
  • Output Validation
  • Structure and Type Enforcements
  • KNOBS

  • Experiment with Prompts
  • Try different attributes fir Personalization
  • Experiment with RAG approaches
  • Compare different LLMs



  • From the Slides blog

    LLM Overview
    LLM Overview

    Kreatebots - Build Smart AI Assistant

    Boost team productivity by automating repetitive tasks with AI assistants created using Kreatebots - a no code bot builder platform.

    Spotlight

    Futuristic interfaces

    Future-proof interfaces: Build unified web-chatbot experiences that anticipate user needs and offer effortless task completion.




    Action-plan    Attack-surace    Security-governance-framework    What-to-secure