Who this track is for: You use AI daily, sometimes for serious work. You may have deployed AI tools for your team or organization. You influence technology decisions. You understand prompting basics and have seen real ROI – but you want to understand the architecture, build production-grade systems, and make AI a structural advantage rather than a collection of useful tricks.
What you will be able to do by Day 7: Articulate the architectural foundations that govern LLM behavior; design and document a prompt library for organizational use; spec an agent workflow that automates a recurring high-value task; and embed verification as a structural pipeline component, not an afterthought.
7 days. 10 minutes a day. One goal: make AI feel like a tool you actually want to use – not a mystery, not a threat.
The Foundation
AI is a system that looks at massive amounts of data, finds patterns, and makes predictions. PATTERNS IN, PREDICTIONS OUT. The newer kind – GENERATIVE AI – creates rather than just predicts. The architecture doing that pattern recognition is the same architecture running in every LLM your team uses. Knowing what is under the hood changes how you deploy it.
Your Personalized Path – 3 Questions
Q1: How often do you currently use AI tools like ChatGPT, Claude, Gemini, or Grok?
A: Never or almost never B: Tried it once or twice C: A few times a week D: Daily
Q2: Have you ever used AI to complete an actual work task?
A: No B: Once or twice, results were mixed C: Yes, regularly
Q3: Do you influence or manage technology decisions at work?
A: No B: Somewhat C: Yes, I drive these decisions
If AI could save your organization 20 hours a week across your team, what would you redirect that capacity toward? Keep that answer in mind through Day 7.
Challenge: Go to quickdraw.withgoogle.com and draw 3 things. The architecture doing the guessing – pattern recognition across massive training data – is the same architecture running in every LLM you deploy.
The Transformer: Foundation of Every LLM You Deploy
Every major commercial LLM – Claude, ChatGPT, Gemini, Copilot – is built on the Transformer architecture. The core mechanism is self-attention: when a Transformer processes your prompt, it weighs every word against every other word simultaneously, assessing which parts of the input are most relevant to each part of the output.
This is why LLMs can maintain coherence across a long document, follow complex multi-part instructions, and understand context that appeared 2,000 words earlier in a conversation.
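To make "weighs every word against every other word" concrete, here is a minimal numpy sketch of scaled dot-product attention – the core operation only, stripped of multi-head projections, masking, and positional encoding; all names and dimensions are illustrative, not drawn from any production model:

```python
import numpy as np

def self_attention(X: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    """Scaled dot-product self-attention over one sequence of token vectors.
    X: (seq_len, d_model) embeddings for every token in the prompt."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # project into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # every token scored against every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax -> attention weights per token
    return weights @ V                                # each output mixes the whole input

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                           # toy case: 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                   # (4, 8): all positions inform each one
```

The final weights @ V step is the mechanism behind that coherence: every output position is a weighted mix of the entire input.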
Implication for operators: Input quality directly determines output quality – not just the question you ask, but the structure, context, and format of everything in the prompt window. An LLM reviewing a well-structured specification document will produce materially better output than the same model reviewing a disorganized one.
Transformers are pre-trained on massive text corpora, then fine-tuned with reinforcement learning from human feedback (RLHF). The fine-tuning layer is what turns a base model into Claude or ChatGPT. Different fine-tuning = different behavior profiles.
GANs and VAEs: Other Generative Architectures
GANs (Generative Adversarial Networks): Two models compete – a generator creates synthetic output, a discriminator evaluates it against real examples. Applications include image generation, synthetic data, design tools. For operators: many AI-generated property renderings and architectural visualizations are GAN outputs. Liability exposure around synthetic imagery that misrepresents finished conditions is real and underaddressed in CRE and AEC.
VAEs (Variational Autoencoders): Compress data into a latent representation then reconstruct new data from it. Used for image denoising, data compression, quality enhancement. Applications include construction document quality improvement from low-resolution inputs and satellite/LiDAR data enhancement in natural resources.
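For intuition on the adversarial dynamic described above, a minimal PyTorch sketch of one GAN training step over toy 32-dimensional vectors – layer sizes, dimensions, and learning rates are illustrative only:

```python
import torch
import torch.nn as nn

# Generator maps 16-dim noise to a 32-dim synthetic sample; discriminator scores it.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    noise = torch.randn(n, 16)

    # Discriminator step: score real samples toward 1, synthetic toward 0.
    fake = G(noise).detach()                     # detach so this pass does not train G
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the discriminator scores as real.
    g_loss = bce(D(G(noise)), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

train_step(torch.randn(8, 32))  # toy "real" batch; replace with actual data
```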
The Dynamic Output Problem
Commercial generative AI models are regularly retrained and updated by their vendors, meaning the same tool deployed today will produce different outputs in 90 days. Implication for operators: static QA checklists applied to dynamic models are already outdated when you implement them. Your verification and governance frameworks need version-controlled outputs and audit trails – the same discipline applied to engineering calculations must now apply to AI-generated technical documents.
Quick Check
Why does input document quality directly affect LLM output quality?
A: Better documents make the AI "smarter"
B: The Transformer's attention mechanism weights every element of the input against every other element – structure and context in the input shape what the model attends to during generation (Correct)
Day 2 Challenge
Identify one high-value document your organization produces regularly – a proposal, a report, a lender package. Evaluate it against the Transformer input quality principle: is it structured in a way that makes relevant information easy for an attention mechanism to identify and weight? What would you change?
At the practitioner level, prompting is a skill. At the operator level, it is a system design problem. The question is not "how do I write a better prompt?" It is "how do I build prompt infrastructure that produces reliable, auditable, scalable output across my organization?"
The 6-Part Framework – Operator Application
ROLE – Roles are standardized across use cases. Document canonical roles for each output type (analyst, underwriter, technical writer, compliance reviewer) and use them consistently.
CONTEXT – Inject context programmatically where possible. Template variables ([CLIENT_NAME], [PROJECT_TYPE], [JURISDICTION]) that pull from your CRM – not manually typed each time. (See the template sketch after this list.)
TASK – Task specifications are tested and versioned. You know which tasks produce reliable output and which require human review gates.
FORMAT – Output format is standardized to downstream workflow requirements. Consistency is the goal, not variety.
RULES – Rules encode your organization's quality standards, compliance requirements, and brand voice. Embedded in the template, not improvised per prompt.
EXAMPLES – Maintain a library of high-quality example outputs – past deliverables that hit the standard – and include them as context in high-stakes prompts.
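Pulled together, the six parts become a single versioned artifact. A minimal Python sketch of what that can look like – the [BRACKET] variables above become format fields here, and the role, rules, and file name are placeholders, not recommendations:

```python
# prompt_templates.py -- one versioned template per output type, shared team-wide.
LENDER_SUMMARY_V1_2 = """\
ROLE: You are a senior commercial real estate underwriter.

CONTEXT: Client: {client_name}. Project: {project_type} in {jurisdiction}.
Source documents appear below the rules.

TASK: Draft the executive summary section of a lender package.

FORMAT: Three sections - Deal Overview, Risk Factors, Sponsor Track Record.
400-600 words total.

RULES: Cite only figures present in the source documents. Formal, neutral
tone. Flag missing required data as [DATA GAP] rather than estimating.

EXAMPLE OUTPUT (a past deliverable that hit the standard):
{example_output}

SOURCE DOCUMENTS:
{source_documents}
"""

def build_prompt(**fields: str) -> str:
    # .format raises KeyError on a missing variable -- failing loudly by design.
    return LENDER_SUMMARY_V1_2.format(**fields)
```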
The Priming Technique
Two-step sequence for complex analytical outputs:
Step 1: Ask a broad conceptual question to load relevant domain knowledge. "What are the primary factors that drive cap rate compression in secondary CRE markets?"
Step 2: Follow with the specific request, referencing the principles just generated. "Using those factors as a framework, analyze the acquisition risk for a retail strip center in [market] at a [X]% cap rate."
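A minimal sketch of the two-step sequence using the Anthropic Python SDK – it assumes ANTHROPIC_API_KEY is set in the environment, and the model id is illustrative:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Step 1: broad conceptual question loads the domain framing.
messages = [{"role": "user", "content":
             "What are the primary factors that drive cap rate compression "
             "in secondary CRE markets?"}]
step1 = client.messages.create(
    model="claude-sonnet-4-6",  # illustrative id; use your approved model
    max_tokens=1024,
    messages=messages,
)
framework = step1.content[0].text

# Step 2: the specific request, with Step 1's answer carried in as context.
messages += [
    {"role": "assistant", "content": framework},
    {"role": "user", "content":
        "Using those factors as a framework, analyze the acquisition risk "
        "for a retail strip center in [market] at a [X]% cap rate."},
]
step2 = client.messages.create(
    model="claude-sonnet-4-6", max_tokens=2048, messages=messages)
print(step2.content[0].text)
```

Carrying the Step 1 answer back in as an assistant turn is what makes Step 2 reference the principles just generated rather than start cold.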
Cross-domain sequences:
AEC: prime with leading causes of infrastructure cost overrun → then request a risk identification framework for a specific project phase
Natural Resources: prime with regulatory and commodity factors creating financing windows → then request a capital raise strategy
Technology: prime with psychological factors driving SaaS churn → then request a retention intervention sequence
The Confidence Tier Instruction
Add to any prompt where accuracy matters for a professional deliverable:
"After each factual claim, rate your confidence: Virtually Certain (95%+), Highly Confident (80–95%), Moderate (60–80%), or Speculative (under 60%). Flag any Speculative claims for external verification."
In financing packages at the $1M–$100M+ level, a speculative claim that passes undetected can derail a lender relationship.
Quick Check
Your team is using AI to draft lender packages. Outputs are inconsistent. What is the most likely root cause?
A: The AI model is unpredictable
B: Team members need more prompting practice
C: Prompts are not standardized – each team member is improvising role, context, format, and rules differently (Correct)
The fix is a standardized prompt template with documented variables, embedded rules, and example outputs – not more individual practice.
Day 3 Challenge
Pick one high-value output type your organization produces regularly. Build a full 6-part prompt template: canonical role, context variables in [brackets], specific task definition, format specification matching downstream requirements, rules encoding your quality and compliance standards, and an example output. Test it on three real cases. Document the failure modes.
An AI agent is a system that can: (1) receive a goal or trigger, (2) break the goal into steps, (3) execute those steps using tools (web search, code execution, API calls, file operations), (4) pass outputs to the next step or to a human review gate, and (5) complete the workflow without requiring a human at each step. A prompt produces text. An agent can produce text, run a calculation, search the web, update a spreadsheet, and send an email – in sequence.
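Stripped to a skeleton, those five capabilities reduce to a loop. A hypothetical Python sketch – the tools are stubs and the plan callable stands in for an LLM planning call; none of this is a real framework:

```python
# Hypothetical agent skeleton: goal in, sequenced tool calls out.
TOOLS = {  # (3) tools the agent may call; stubbed for illustration
    "web_search":   lambda query: f"[search results for: {query}]",
    "update_sheet": lambda row: f"[appended row: {row}]",
    "send_email":   lambda to, body: f"[emailed {to}]",
}

def run_agent(goal: str, plan, review_gate=None) -> dict:
    """(1) receive a goal, (2) break it into steps via `plan`, (3) execute
    each step with a tool, (4) pass outputs forward, (5) finish without a
    human at every step."""
    context = {"goal": goal}
    for step in plan(goal):                          # (2) goal -> ordered steps
        output = TOOLS[step["tool"]](*step["args"])  # (3) execute with a tool
        context[step["name"]] = output               # (4) feed the next step
    if review_gate is not None:                      # optional human gate at the end
        review_gate(context)
    return context

# Toy plan: a fixed three-step sequence standing in for LLM-generated planning.
plan = lambda goal: [
    {"name": "research", "tool": "web_search",   "args": (goal,)},
    {"name": "log",      "tool": "update_sheet", "args": ("research done",)},
    {"name": "notify",   "tool": "send_email",   "args": ("ops@example.com", "done")},
]
result = run_agent("compile weekly status report", plan)
```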
The No-Code Path: n8n
n8n is a no-code workflow orchestration tool that connects AI models to your existing systems – Gmail, Google Sheets, Slack, CRMs, project management tools – without requiring engineering resources.
Proven operator use cases:
Lead qualification pipeline: new contact enters CRM → AI evaluates against criteria → qualified leads routed, unqualified archived with reason code
Invoice processing: invoice received by email → AI extracts line items → populated into accounting system → exceptions flagged for human review
Content pipeline: source material committed to repository → AI generates structured output → validation step → output published or queued for review
Compliance monitoring: regulatory source monitored → changes detected → AI summarizes impact → alert sent to relevant team member
The Prosper Systems production pipeline is a working example: email digest → scoring and routing → Claude API processing → validation → DOCX/image generation → GitHub commit → publish queue. Fully automated, human review at the output stage.
The Code Path: OpenAI Agents SDK
For operators with engineering resources, the OpenAI Agents SDK enables multi-agent systems where specialized agents handle different parts of a workflow, tool use within a single agent run, and handoffs between agents based on task type or confidence threshold.
Decision framework:
Single workflow, non-technical team → n8n
Multiple interconnected workflows, some engineering capacity → n8n + API calls
Complex multi-agent systems, dedicated engineering resources → OpenAI Agents SDK (sketch below)
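A minimal sketch of the handoff pattern, assuming the openai-agents Python package (pip install openai-agents) and an OPENAI_API_KEY in the environment; the tool is stubbed and the agent instructions are illustrative – check the SDK's current documentation, as it is young and moving quickly:

```python
from agents import Agent, Runner, function_tool

@function_tool
def lookup_invoice(invoice_id: str) -> str:
    """Fetch invoice details from the accounting system (stubbed here)."""
    return f"Invoice {invoice_id}: $12,400, net 30, vendor ACME"

billing = Agent(
    name="Billing Analyst",
    instructions="Answer invoice questions using the lookup tool.",
    tools=[lookup_invoice],
)
triage = Agent(
    name="Triage",
    instructions="Hand any billing or invoice question to the Billing Analyst.",
    handoffs=[billing],  # handoff based on task type
)

result = Runner.run_sync(triage, "What are the terms on invoice 1042?")
print(result.final_output)
```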
The Expense Tracking Agent – A Replicable Template
A simple but illustrative example: receipt capture via messaging app → agent categorizes and compiles → weekly/monthly spend report delivered automatically. The architecture is replicable for project cost tracking, subcontractor invoice processing in AEC, LP distribution reporting in CRE, and field expense management in natural resources. The principle: identify one high-value output that currently requires manual aggregation. Build an agent that handles the aggregation. Human reviews the output, not the inputs.
Quick Check
Your team spends 4 hours per week manually compiling project status reports from emails, spreadsheets, and project management tools. What is the right operator response?
A: Hire a part-time coordinator to do it faster
B: Ask each team member to use a better prompt when they write their updates
C: Build an agent workflow that pulls from source systems, generates the compiled report, and routes it to a human for final review (Correct)
The 4-hour task is the automation target. The human's job becomes reviewing the output, not producing it.
Day 4 Challenge
Identify one recurring task in your organization that fits the agent automation profile: happens regularly, requires pulling from multiple sources, produces a predictable output format, currently consumes disproportionate staff time. Sketch the workflow: trigger → steps → tools needed → output → human review gate. You do not need to build it today. Spec it clearly enough that someone could.
Relying on individuals to verify AI output before using it creates inconsistent quality control. Some team members are diligent, some are not. Under time pressure, verification steps get skipped. The solution is structural: embed verification checkpoints in the pipeline itself, before output reaches a client, lender, or regulator.
The Three-Gate Model
Gate 1: Source Verification (input stage) – Before AI processes source material, verify the source is current and authoritative. For regulatory content: is this the current version? For market data: is this the most recent available?
Gate 2: Output Validation (post-generation) – Automated checks before output proceeds: Does it meet the required format? Are all sections present? Are there obvious hallucination signals (dates that do not exist, names absent from source material, implausibly precise statistics)? The Prosper Systems pipeline implements this: validate_md.py runs after every Claude API call and halts the pipeline if the output fails structural checks. (A sketch of this kind of check follows the gate list.)
Gate 3: Human Review (pre-delivery) – High-stakes outputs require human review before delivery. The human's job is scoped: check the specific categories most vulnerable to AI error for this output type – statistics, citations, regulatory references, client-specific details. Not a full reread of every word. Scoped review keeps the time cost manageable while addressing the actual risk surface.
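A hypothetical sketch of a Gate 2 structural check – in the spirit of the validate_md.py step described above, but not that script; the required sections and hallucination heuristics are placeholders:

```python
import datetime
import re

REQUIRED_SECTIONS = ["## Summary", "## Findings", "## Recommendations"]

def validate_output(text: str) -> list[str]:
    """Gate 2: automated structural checks on generated markdown.
    Returns a list of failures; an empty list lets the output proceed."""
    failures = [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in text]
    next_year = datetime.date.today().year + 1
    for year in re.findall(r"\b(?:19|20)\d{2}\b", text):  # dates that do not exist yet
        if int(year) > next_year:
            failures.append(f"implausible year: {year}")
    return failures

sample = "## Summary\nDeal closes in 2087.\n## Findings\nStrong comps."
if failures := validate_output(sample):
    raise SystemExit(f"Gate 2 failed, halting pipeline: {failures}")
```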
Version Control for AI Outputs
AI models update. The same prompt submitted to the same model six months later may produce different output. For professional deliverables, document:
Model version used (e.g., claude-sonnet-4-6, gpt-4o-2026-03-xx)
Prompt version used (if prompt library is versioned)
Output date
Verification steps completed and by whom
This is the same discipline applied to engineering calculations, financial models, and legal documents. AI-generated professional content requires the same treatment.
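A minimal sketch of an audit record capturing those four fields – the schema, file name, and helper are illustrative, not a prescribed format:

```python
import datetime
import json
from pathlib import Path

def log_ai_output(deliverable: str, model: str, prompt_version: str,
                  verified_by: str, checks: list[str]) -> None:
    """Append one audit record per AI-assisted deliverable (JSONL)."""
    record = {
        "deliverable": deliverable,
        "model_version": model,               # the exact id the API reports
        "prompt_version": prompt_version,     # from the versioned prompt library
        "output_date": datetime.date.today().isoformat(),
        "verification": {"steps": checks, "by": verified_by},
    }
    with Path("ai_audit_log.jsonl").open("a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_output("lender-package-0425", "claude-sonnet-4-6", "LENDER_SUMMARY v1.2",
              "j.doe", ["gate2-structural", "human-review-statistics"])
```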
The Liability Architecture
For CRE and AEC professionals, the verification pipeline is not just a quality tool – it is a liability management tool. When a lender, regulator, or client challenges an AI-assisted deliverable, the question they ask is: what was your review process? A documented, structured verification pipeline is a defensible answer. "I reviewed it and it looked right" is not. Firms that build verification documentation into their AI workflows now will be ahead of the regulatory and contractual requirements that are coming.
Quick Check
Your firm produces AI-assisted feasibility studies. A client finds a factual error in a delivered study. What is the right operational response?
A: Blame the AI tool and note that AI errors are industry-wide
B: Retrain staff on verification best practices
C: Audit the pipeline for where the verification gate failed, add a structured check for that error type, and document the updated process (Correct)
The error is a pipeline design signal, not a personnel failure. Fix the gate.
Day 5 Challenge
Map the verification architecture for one AI-assisted output type your organization currently produces. For each stage: What source verification happens before the AI call? What automated validation runs after? What does the human review gate check, specifically? What is documented for audit purposes? Identify the gaps. Prioritize the highest-risk one. Spec the fix.
Before deploying any AI tool or workflow in your organization, answer these:
Who owns the output? Every AI-generated deliverable needs a named human owner accountable for its accuracy. Define this before deployment, not after a failure.
What data is going in? Map what information enters each AI tool. Identify anything that is client confidential, personally identifiable, subject to NDA, or regulated (HIPAA, GDPR, CCPA, ITAR). Most enterprise AI tools store inputs. Know the data retention policy before you paste anything sensitive into any tool.
What is the human review requirement? Categorize outputs by stakes: low (AI output used directly), medium (reviewed by originator), high (independent human review required). Document and apply consistently.
What is the escalation path when AI produces something wrong? Define before deployment: who is notified, what is the correction process, how is the incident documented.
Data Privacy at the Operator Level
Many operators assume paid enterprise subscriptions come with data protection guarantees. The reality varies significantly by vendor and plan tier. Key questions:
Does this tool use my inputs for model training? (Many do by default; most allow opt-out)
Where is my data stored, and in what jurisdiction?
What is the data retention period?
Does my firm's existing data governance policy cover AI tool inputs?
Conservative operational standard: treat all external AI tools as if inputs may be retained. Design prompts and workflows to inject context at the task level rather than uploading raw confidential documents where possible.
Building Organizational AI Policy
At minimum, an organizational AI policy should address:
Permitted tools – which AI tools are approved, at what data sensitivity level
Prohibited inputs – what categories of information cannot enter external AI tools
Output ownership – who is accountable for AI-assisted deliverables
Verification requirements – what review is required before AI output is delivered to clients, submitted to regulators, or published
Incident reporting – what constitutes a reportable AI-related incident and what the reporting path is
Update and review cadence – how often the policy is reviewed given the pace of model and tool changes
Quick Check
A team member uses a free-tier AI chatbot to draft a section of a client proposal, including pasted excerpts from a confidential client financial model. What is the governance failure?
A: The team member used the wrong AI tool for proposal writing
B: The draft section was not reviewed before inclusion
C: Confidential client financial data was input into an external AI tool with unknown data retention practices – a potential data breach and contractual violation (Correct)
Day 6 Challenge
Audit one AI tool your organization currently uses: What data retention policy applies to your inputs? Does your firm's data governance policy address AI tool inputs? What is the most sensitive category of information that has entered this tool in the past 90 days? What changes, if any, are needed? Document your findings. This is the beginning of your organizational AI governance posture.
A single AI workflow saving 3 hours per week = 156 hours per year = nearly 4 full work weeks recaptured. Five concurrent workflows each saving 3 hours per week = 780 hours per year = nearly 20 work weeks across your team. The value is not in any single workflow. It is in the stack running in parallel.
The moat is not the tool. It is the accumulated institutional knowledge about how to use the tool – failure modes, prompt refinements, verification requirements, integration decisions – knowledge that competitors who start later must build from zero.
The ROI Standard – Tiered
The benchmark for a high-value AI deployment is not a single number – it is a tiered standard based on deployment type.
10:1 minimum. The floor. Derived from the foundational example: an 8-page finance options list expanded to 80 pages of structured content in 30 minutes. 80 divided by 8. If a workflow cannot clear 10:1, AI is not the right tool for it – deprioritize.
20:1 – daily workflow target. A repeatable prompt system running in production every day. GitHub-class automation. Practitioners who have built a working prompt library and tool stack operate at this level.
50:1 – full deployment. A production system designed once and running continuously – like this course website, built in one session and serving ongoing users. When the system does the work, the ratio scales with usage.
80:1 and above – exceptional deployment. The original example: one week of manual work completed in 30 minutes. Not every project reaches this, but when architecture, prompting, and workflow all align, it is achievable and repeatable.
When you spec a new deployment, ask: what is the realistic ROI multiple? Under 10:1 – deprioritize; AI is not the right tool. At 10:1 or above – build it now. Over 20:1 – make it a daily system. Over 50:1 – document it, scale it, and teach it.
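Those thresholds reduce to a small triage function – a sketch with the tiers from this section hard-coded:

```python
def roi_tier(manual_hours: float, ai_hours: float) -> str:
    """Triage a proposed deployment by its realistic ROI multiple."""
    ratio = manual_hours / ai_hours
    if ratio >= 50:
        return f"{ratio:.0f}:1 - document it, scale it, teach it"
    if ratio >= 20:
        return f"{ratio:.0f}:1 - make it a daily system"
    if ratio >= 10:
        return f"{ratio:.0f}:1 - build it now"
    return f"{ratio:.0f}:1 - deprioritize; AI is not the right tool"

# One week of manual work (40 h) done in 30 minutes: the 80:1 example above.
print(roi_tier(manual_hours=40, ai_hours=0.5))
```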
Your 90-Day Deployment Roadmap
Days 1–30: Foundation
Finalize and document prompt library for the 5 highest-value recurring output types
Select and configure primary AI tools for each output type
Implement Gate 2 output validation for your highest-stakes workflow
Brief your team on data governance requirements
Days 31–60: Automation
Spec and build your first agent workflow (highest ROI target from Day 4)
Implement human review gate documentation for all medium and high-stakes outputs
Begin cross-tool validation practice for high-stakes deliverables
First policy draft: permitted tools, prohibited inputs, output ownership
Days 61–90: Institutionalization
Second agent workflow live
Prompt library version 1.0 formally released to team
First validation audit: are prompts being used consistently? Are verification gates running?
Organizational AI policy formally adopted
ROI measurement: document time saved across all active workflows
Final Challenge
Write a one-page deployment brief for your organization:
What are the 3 highest-ROI AI workflows you will deploy in the next 90 days?
What is the estimated time savings for each?
What verification architecture applies to each?
What governance requirements apply?
Who owns each workflow?
This is your operating plan. It is also the document that distinguishes you from organizations that are still deciding whether to take AI seriously.
What Comes Next
Track 3 is the foundation. The frontier for operators:
Multi-agent systems – agents that orchestrate other agents across complex workflows
Custom model fine-tuning – training models on your proprietary data for domain-specific applications
AI in client deliverables – embedding AI-assisted analysis as a product differentiator, not just an internal efficiency tool
Regulatory compliance – staying ahead of AI governance requirements as they formalize across CRE, AEC, financial services, and natural resources
The operators who engage with those questions now will define the standard their sectors are measured against in three years.