Mastering the Architecture of a Prompt
Farhan Aziz introduced foundational frameworks for moving AI outputs from generic to high-value. Quality in generative AI is directly correlated with the structure of the input, with the CAP (Context, Action, Persona) and DIG (Describe, Introspect, Goal) frameworks serving as the primary standards for legal administrators.
- Structure with CAP: Every prompt should include Context (audience and situation), a specific Action (draft, summarize or compare) and a defined Persona (thinking as a coach, consultant or skeptic).
- Verify Data with DIG: Before analyzing complex datasets, force the model to Describe the columns and values, Introspect on what the data cannot answer and clarify the ultimate Goal.
- Utilize “Sparring Partners”: Use AI to prepare for difficult stakeholder conversations by explicitly instructing it not to agree with you, forcing it to challenge your arguments and identify weak spots.
- Ground with Source Truth: To prevent “hallucinations,” provide the model with actual source documents — such as vendor agreements or internal policies — to act as its specific source of truth.
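The CAP structure and the source-grounding advice above can be sketched as a simple prompt builder. The function and field names here are illustrative, not part of any framework's official tooling:

```python
def build_cap_prompt(context: str, action: str, persona: str, sources: str = "") -> str:
    """Assemble a prompt with explicit Context, Action and Persona sections.

    Optionally append source documents so the model answers from a
    specific source of truth rather than from memory.
    """
    sections = [
        f"Context: {context}",
        f"Action: {action}",
        f"Persona: {persona}",
    ]
    if sources:
        sections.append(
            "Use ONLY the following source material; if the answer is not "
            "in it, say so rather than guessing:\n" + sources
        )
    return "\n\n".join(sections)


# Hypothetical usage for a vendor-agreement review
prompt = build_cap_prompt(
    context="The audience is the firm's management committee.",
    action="Summarize the attached vendor agreement in five bullet points.",
    persona="Act as a skeptical procurement consultant.",
    sources="[paste vendor agreement text here]",
)
```

The same pattern extends to the "sparring partner" idea: the persona line can explicitly instruct the model to disagree and probe for weak spots.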
The Financial Realities of AI Integration
The transition to AI-driven efficiency creates a fundamental tension with traditional hourly billing, Ben Schorr and Stephanie Everett explained. When document analysis that once took 10 hours is reduced to 30 minutes, firms must re-evaluate their business models and consider alternative fee arrangements to capture the value of their technology.
- Audit Active Licenses: Unused Copilot licenses represent direct financial waste; firms should use Microsoft’s Viva Insights to track the ratio of paid licenses to active users, aiming for 100% utilization.
- Recognize Three-Dimensional ROI: ROI should be measured through efficiency (time reduction), financial impact (based on the billing model) and strategic value (client satisfaction and talent recruitment).
- Clean Data for Accuracy: LLMs accurately extract quantitative data from PDFs only 37% of the time. For financial or numerical analysis, administrators should extract data to clean CSV or Excel files first.
- Address the “Light Switch” Effect: AI does not unlock data; it illuminates it. Proper data hygiene and permissions are essential, as AI makes it easier for employees to find sensitive information they already had technical access to.
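The "clean data first" point above can be made concrete with a small sketch: rather than asking an LLM to pull figures out of a PDF, export them to CSV and compute on them directly. The sample data below is invented for illustration:

```python
import csv
import io

# Illustrative billing data, as it might look after extraction to CSV.
sample = """matter,hours_billed,fees
Acme v. Baker,42.5,12750
Estate of Chen,10.0,3000
"""

# Deterministic arithmetic on clean columns, instead of probabilistic
# extraction from a PDF.
total_fees = 0.0
for row in csv.DictReader(io.StringIO(sample)):
    total_fees += float(row["fees"])

print(f"Total fees: {total_fees:.2f}")  # Total fees: 15750.00
```

The LLM can still be used for the qualitative analysis; the numbers themselves come from the spreadsheet.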
Governance and Policy Management
If a firm does not have a written AI policy, its de facto policy is “do whatever you want” — a dangerous stance in the legal industry. Governance sessions emphasized the need for clear, documented standards to manage “shadow AI,” where employees use unauthorized tools without guidance.
- Monitor with Viva Insights: Track adoption trends by manager or practice area to identify high-performing teams and target training where utilization is low.
- Maintain Performance with Resets: As chat sessions grow long, AI performance can drift. Administrators should reset or summarize long sessions when the model begins to lose context or blend instructions.
- Leverage Workspace Projects: Instead of re-explaining context in every chat, use project features to upload source documents once, allowing multiple related chat sessions to reference the same data.
- Commit to Change Management: AI adoption is not a one-time event; it requires ongoing support through champions committees and regular success-story sharing to prevent users from abandoning the tools after a mediocre first experience.
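The long-session reset described above can be sketched as a simple guard: once the history exceeds a rough token budget, replace it with the original instructions plus a summary. Here `summarize` is a hypothetical helper that would itself call the model; it is stubbed for illustration:

```python
TOKEN_BUDGET = 8000  # illustrative threshold, not a vendor-documented limit

def estimate_tokens(messages: list[str]) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return sum(len(m) for m in messages) // 4

def summarize(messages: list[str]) -> str:
    # Stub: in practice, send the history back to the model and ask for a
    # concise summary of decisions made and context established so far.
    return f"Summary of prior discussion ({len(messages)} messages)."

def maybe_reset(system_prompt: str, messages: list[str]) -> list[str]:
    """Return a fresh, compact history if the session has grown too long."""
    if estimate_tokens(messages) <= TOKEN_BUDGET:
        return messages
    return [system_prompt, summarize(messages)]
```

Workspace "project" features accomplish something similar declaratively: source documents live outside any single chat, so each new session starts compact.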
Through structured prompting, rigorous data cleaning, proactive license management and clear governance, law firms can bridge the gap between AI potential and billable reality.