Mara Inglezakis | 21 November 2025
Automation, computing, and labor with hybrid AI
Knowing and doing
About me
• I work at Delta, but I’m speaking for myself
• Domain expertise: logistics, manufacturing,
retail, supply chain, sustainability
• Capabilities: data/information/enterprise
architecture; digitalization; data strategy;
knowledge management
• Background: epistemology… and Victorian
Studies!
• Contact: mara.inglezakis.owens@gmail.com
in/mara-inglezakis-owens
Automation, computing, and labor from textiles to chatbots
The future is history
Automation, computing, and labor: woven together from the (very) long 19th century
This is a story of: a high-skill profession that became automated
the emergence of automated computing from the automation of a high-skill profession
1771
Textile work
consolidates
in the mill
1785
Power-loom
introduced, automating
the HANDS of the
skilled laborer
(Cartwright)
1811 through 1820s
Labor movements
react against machines
and automation
1822
Difference Engine
v. 1 delivered
(Babbage)
1832
Modern division of labor
described in On the
Economy of Machinery
and Manufactures
(Babbage)
1837
Analytical Engine
described (Babbage)
1843
‘Note G’, an
algorithm for
creating jacquard
fabric, delivered,
automating the
MIND of the skilled
laborer (Lovelace)
Today’s goal: help our organizations make pragmatic decisions about automation,
computing, and labor in the long 21st century
Human-computer interaction scaffolding for efficient and humane data/tech organizations
Background assumptions:
• What ontologies and knowledge graphs are good for
• What LLMs are good for
• Who feels good (and not so good) about interacting with them
Foreground assumptions:
• How the role of ontologies and knowledge graphs has evolved as a means of expression, particularly in the last
12-18 months
• How ontologies and knowledge graphs can address shortcomings associated with LLMs
• How pragmatically deployed LLMs support our colleagues
Encoding
Note on usage
• Originally from telegraphy: the act of creating rhythmic
references for alphabetic glyphs
• My usage, grounded in semiotics: the act of moving
between n-ary representations for some semantically
stable concept (n references, 1 referent)
“mobile”
“portable”
“κινητό”
References Referent
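The n-references-to-one-referent idea can be sketched in a few lines of Python. This is an illustrative toy, not anything from the talk: the referent string, the language keys, and the `encode` helper are all hypothetical.

```python
# One semantically stable referent, many surface references.
# 'Encoding' in the semiotic sense used here is moving between
# references while the referent stays fixed.

REFERENT = "hand-held telephone"

REFERENCES = {
    "en-us": "mobile",
    "en-gb": "portable",
    "el": "κινητό",
}

def encode(reference: str, target_lang: str) -> str:
    """Move from one reference to another for the same referent."""
    if reference not in REFERENCES.values():
        raise ValueError(f"unknown reference: {reference!r}")
    # The referent is stable, so any known reference maps to any other.
    return REFERENCES[target_lang]
```

For example, `encode("mobile", "el")` moves between two references for the same referent without touching the referent itself.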
Who encodes what
What LLMs are good at
• Encoding work
• Well-understood tasks that involve translation, summary, and high
level comparison
• No idiom or attractive pointers to other concepts
• Work that does not expect a routine outcome
• Work that follows a relatively routine procedure
Who encodes what
Who is (and isn’t) open to labor-saving with LLMs
• My ONLY job is encoding → low openness
• The translating or encoding part of my job is not that important to me—
what’s important is what I can achieve with that encoding → high
openness
Who encodes what
What LLMs are good at (examples)
Address In Order To
• First-line customer service content (helpdesk, reservations)
• SMEs: First-line customer service workers (high openness)
Make sure that a customer calling or chatting with a helpdesk or
reservations line gets to the right service area without human intervention or
self-selection
• Physical world content (manufacturing plants, ingredients, aircraft
interiors)
• SMEs: Technicians and engineers (high openness)
Catalogue BOM and impacting processes; product ingredients under
standardized botanical terms
• Programming languages (e.g. SQL) and imperative or declarative code
content
• SMEs: an extended business intelligence community (low openness)
Compare one or more SQL codebases for topographical and broad
thematic similarities and differences
• Metacontent and information organization (information architectures;
document structures; iconographic norms)
• SMEs: Technicians and engineers (high openness)
Transpose built environment content from image-intensive PDF to text-
intensive CSV and TTL
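The last example, transposing structured content from CSV toward TTL, can be sketched as a minimal subject/predicate/object serializer. This is a sketch under assumptions: the `ex` prefix, the namespace URL, and the cabin-layout row are all hypothetical, and a real pipeline would use an RDF library rather than string formatting.

```python
import csv
import io

def csv_rows_to_turtle(csv_text: str, prefix: str, ns: str) -> str:
    """Serialize subject,predicate,object CSV rows as Turtle triples."""
    lines = [f"@prefix {prefix}: <{ns}> ."]
    for row in csv.DictReader(io.StringIO(csv_text)):
        lines.append(
            f'{prefix}:{row["subject"]} '
            f'{prefix}:{row["predicate"]} '
            f'{prefix}:{row["object"]} .'
        )
    return "\n".join(lines)

# Hypothetical built-environment row, echoing the lavatory/galley example.
doc = "subject,predicate,object\nLavatory,adjacentTo,Galley\n"
ttl = csv_rows_to_turtle(doc, "ex", "http://example.org/cabin#")
```

The output is a prefix declaration followed by one triple per CSV row, which is the text-intensive form an LLM or reasoner can then consume.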
Who knows what
What ontologies and knowledge graphs are good at
• Supporting agents (human +) who need to understand specialized
knowledge so that they can work
• This knowledge is difficult to access because:
• It’s ill-documented
• It isn’t documented at all
Who knows what
Who is (and isn’t) open to labor-saving through ontologies and knowledge graphs
• My ONLY job is knowing stuff → low openness
• The ‘knowing’ part of my job is not that important
to me; what’s important is what I do with that
knowing → high openness
Who knows what
What ontologies and knowledge graphs are good at (examples)
Address In Order To
• Domain: High-context, structurally aware content about business procedures and
requirements
• SMEs: Team of standards experts (low openness)
Automate the writing of fulfillment parameters for highly-regulated products
• Domain: Retail product offerings like hardgoods
• SMEs: Merchandisers, PIM, product taxonomy workers (low openness)
Merge product taxonomies for multiuse deployment (browse and search)
• Domain: IT ecosystems from network to data where they show up in integrations and
datastores
• SMEs: an extended business intelligence community; an engineer/technical
architecture community (mixed)
Monitor complex IT ecosystem health; automate integrations
• Domain: Customer and user interaction with business agents and systems over time
• SMEs: Operations workers (high openness)
Automate the operator
• Domain: Physical world (manufacturing plants, aircraft interiors, document structures)
• SMEs: site experts (high openness)
Automate digitalization process
Putting ‘who knows what’ to work
It’s the interface, stupid!
Why LLMs
• LLMs have taken off as commercial products because of the discursive natural
language interface
• Human beings have been trained to accept asynchronous, short-form textual
communication (‘chat’) for 25 years
• Straight text chat apps (AIM; ICQ) → App-specific chats (G-Chat) → content-rich
chat apps (e.g. Teams; Snapchat), and, of course, SMS/MMS
Why not ontologies and knowledge graphs, historically
• Ontologies and knowledge graphs have required ‘difficult’ encoding; dearth of
approachable interfaces
Putting ‘who knows what’ to work
It’s the interface, stupid!
Why ontologies and KG NOW
• By relieving the SME of the labor of encoding through its discursive
natural language interface, the LLM empowers the SME:
• As ontologist
• As ontology beneficiary
• The ontology (and its author(s)) introduces traditional documentation
techniques into AI, making LLMs more useful to our organizations
• Ontology as simulacrum for what the SME knows about a domain
(descriptive function)
• Ontology as a simulacrum for how the SME knows what to do with a domain
(constraint function)
10+ years of ontologies and LLMs in (mostly) large organizations
Lessons Learned
More neatness, less scruffiness
Lessons Learned
•LLM-assisted work needs more people attuned to concept and planning, less “throw stuff at
the wall and see if it sticks”
•Domain-specific
•Quality + reliability
Be the master manufacturer
Lessons Learned
•LLMs find the next right-ish word in a phrase, full stop
•LLMs struggle with ordinality, including counting
•Humans or reliable agents acting on behalf of humans must:
•Create plans
•Evaluate performance at each step of execution
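The plan-and-evaluate discipline above can be sketched as a small control loop. Everything here is illustrative: `run_plan`, the step names, and the stand-in executor and evaluator are hypothetical placeholders for a real LLM call and a human-authored acceptance check.

```python
def run_plan(steps, execute, evaluate):
    """Execute steps in order; stop the moment a step fails evaluation.

    The human (or a reliable agent acting for one) owns the plan and the
    evaluator; the LLM only fills the execute slot.
    """
    results = []
    for step in steps:
        output = execute(step)           # in practice, an LLM call
        if not evaluate(step, output):   # human-authored acceptance check
            return results, f"failed at: {step}"
        results.append(output)
    return results, "ok"

# Illustrative stand-ins for a real executor and evaluator:
steps = ["draft summary", "extract terms", "emit TTL"]
results, status = run_plan(
    steps,
    execute=lambda s: s.upper(),
    evaluate=lambda s, out: out != "",
)
```

The point of the design is that evaluation happens at every step, not once at the end, so a bad intermediate output stops the run before it contaminates later steps.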
Be the hiring manager (or the UX expert)
Lessons Learned
•Develop personae for different task families for your LLMs to use
•Job description
•Success metrics
•Job aids
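A persona built from those three parts can be sketched as a small data structure that renders into a system prompt. The `Persona` class, its fields, and the reservations example are hypothetical illustrations, not the talk's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A task-family persona: job description, success metrics, job aids."""
    name: str
    job_description: str
    success_metrics: list = field(default_factory=list)
    job_aids: list = field(default_factory=list)

    def to_system_prompt(self) -> str:
        """Render the persona as a system prompt for an LLM."""
        metrics = "; ".join(self.success_metrics)
        aids = "; ".join(self.job_aids)
        return (f"You are {self.name}. {self.job_description} "
                f"Success looks like: {metrics}. Consult: {aids}.")

# Hypothetical persona for the first-line reservations task family:
reservations = Persona(
    name="a first-line reservations agent",
    job_description="Route each customer to the right service area.",
    success_metrics=["correct routing without human intervention"],
    job_aids=["service-area taxonomy", "escalation policy"],
)
prompt = reservations.to_system_prompt()
```

Treating the persona as data rather than free text makes the job description, metrics, and aids reviewable and versionable like any other job posting.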
Rhetoric and composition: everyone’s core job skill
Lessons Learned
• Knowing is not enough; you must do it in writing
• Those who write clearly, precisely, and thoroughly win
• Those who do not write clearly, precisely, and thoroughly lose
This is a documentation problem—not a technology problem
Lessons Learned
•Build on colloquial documentation behaviors
•Strong documentation practice is the cornerstone of your agentic strategy
•Care for your colleagues (Luddites included)
Quantify it right
Lessons Learned
•Quantify your objects—ground your symbols in a way that directly reflects how they exist in the world:
•In the ontology
•Logically consistent definitions
•Definitions with consistent depth and topology of logical expression
•Long-form text descriptions
•In the underlying resource
•Datastores, the web, and all classification systems use mathematical logic, too
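The two in-ontology habits above, logically consistent definitions plus long-form descriptions, can be linted mechanically. The sketch below is hypothetical: the `ONTOLOGY` dict, its class names, and the `lint` function are illustrative stand-ins for a real ontology store and CI check.

```python
# Toy ontology: each class should carry both a logical definition and a
# long-form text description; class names here are illustrative.
ONTOLOGY = {
    "SKU": {
        "definition": ("subClassOf", "Offering"),
        "description": "A stock-keeping unit as sold to a customer.",
    },
    "Product": {
        "definition": ("subClassOf", "Artifact"),
        "description": "",  # missing long-form text: should be flagged
    },
}

def lint(ontology: dict) -> list:
    """Flag classes missing either a definition or a description."""
    problems = []
    for name, cls in ontology.items():
        if not cls.get("definition"):
            problems.append(f"{name}: no logical definition")
        if not cls.get("description"):
            problems.append(f"{name}: no long-form description")
    return problems
```

Running checks like this on every ontology change keeps the symbols grounded before a reasoner or an LLM ever sees them.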
You need strong executive sponsorship
Lessons Learned
•Robust AI strategy impacts org design and expenditure
•Executives are the authority on org design and expenditures
•Speak their language even if it isn’t yours
Tie your work to the ‘P/L’
Lessons Learned
•Org design shapes expenditure
•Capital expense (e.g. long term programs; some tech; may include FTE)
•Operational expense (e.g. itinerant labor and short-term programs; some tech)
•Quantify direct/indirect profits
Your most powerful, least-prepared ally: your legal team
Lessons Learned
•Vendors and customers want your intellectual property (IP), especially if you have big or unique market share
•Your legal team cannot protect your IP unless they understand the full scope
• Schematic information
• Algorithms and statistical procedures
• Data used to fulfill an algorithm or statistical procedure
• Fruits of any algorithm or statistical procedure
• Prompts and personae
• Usability guarantees for prompts and personae
Putting ontologies and LLMs to work in your organization
Best Practices
Impact, not volume
Best Practices
•Talk about the business impacts of your work, not the volume
•“Productivity” does not make money
•Non-imperative output has new value:
•Long-form ecosystem, policy, procedure documentation
•Ontologies
•Prompts or prompt templates
1) Define use case, 2) choose aesthetic, 3) define ontology
Best Practices
Human consumers—world of ideas (idea > form); Machine consumers—world of forms (form > idea)
Descriptive
• For human consumers: Natural-language-rich description or context for people (the
‘semantic layer’)
Helps with: Data discovery for catalog-like functions
(‘Packaged Good’ and ‘Retail Item’ are equivalent to ‘SKU’)
• For machine consumers: Unambiguous context for machines (clear relation between
references and referents)
Helps with: Concept coherence in complex domains
(‘SKU’ is not a ‘Product’ though ‘Product’ is always a part of ‘SKU’; the
set of references for ‘SKU’ is disjoint from the set of references for
‘Product’)
Constraining
• For human consumers: Fact discovery via entailment/reasoning on some well-understood
domain
Helps with: entailment discovery in well-known domains
(‘Should I store equine feed in the bin after cattle feed without
cleaning it out?’)
• For machine consumers: Rule specification (the world of fact interpretable as norms)
Helps with: telling the LLM how to ‘read’ specialized corpuses
(‘Lavatories are usually annotated with a straight line attached to a
closed half oval and diagrammed adjacentTo Galleys’)
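The SKU/Product disjointness claim in the table can be sketched as a constraint check over reference sets. This is a toy under assumptions: the reference sets and the `disjoint` helper are hypothetical, and a production ontology would express the same constraint with owl:disjointWith and a reasoner.

```python
# Constraining function of the ontology: the reference sets for two
# concepts must not overlap. Reference strings here are illustrative.
REFERENCES = {
    "SKU": {"sku", "stock-keeping unit", "retail item", "packaged good"},
    "Product": {"product", "article", "merchandise line"},
}

def disjoint(a: str, b: str) -> bool:
    """True when the two concepts share no surface references."""
    return not (REFERENCES[a] & REFERENCES[b])
```

A check like `disjoint("SKU", "Product")` gives a machine consumer the unambiguous boundary that the prose version of the rule only implies.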
Publish ontology design patterns and stick to them
Best Practices
•Faster development
•Easier to automate
•Higher usability for humans and machines
Publish ontology design patterns and stick to them
Best Practices
Especially important for:
•New class or property declaration; deprecation
•Defining Class-to-Class facts and Class-to-Literal facts
•Describing events, change over time, and causality
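A declaration/deprecation pattern like the first bullet can itself be enforced in code. The sketch below is hypothetical: `check_pattern`, its field names, and the two sample classes are illustrative stand-ins for a real pattern checker run against the published ontology.

```python
def check_pattern(classes: list) -> list:
    """Enforce a simple pattern: an active class must carry a definition,
    and a deprecated class must name its replacement."""
    violations = []
    for c in classes:
        if c.get("deprecated") and not c.get("replaced_by"):
            violations.append(f'{c["name"]}: deprecated without replacement')
        if not c.get("deprecated") and not c.get("definition"):
            violations.append(f'{c["name"]}: active without definition')
    return violations

# Illustrative class records; 'DryGood' violates the deprecation pattern.
classes = [
    {"name": "Hardgood", "definition": "A durable retail item."},
    {"name": "DryGood", "deprecated": True},
]
```

Publishing the pattern is only half the practice; a check like this is what makes "stick to them" automatic rather than aspirational.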
Give your ontology good UX
Best Practices
• Who or what is using the ontology?
• What do they want to accomplish—why?
• Only real user feedback can validate UX!
Embrace quality and reliability practices
Best Practices
• SDLC (phenomenological and ontological)
• Logical consistency
• No surprise entailments
• Evaluation must always be part of the plan
• Human or other reliable agent in the loop
• Ontology as Q/RE tool
Embrace quality and reliability practices: profile yourself
Best Practices
                              Low Conservatism      High Conservatism
How much do we need to…       Low control           High control
What would happen if we…      High risk tolerance   Low risk tolerance
Can’t sleep at night if…      Something to gain     Something to lose
I love my legal team…         Low regulation        High regulation
Thank you!

  • 33.