People Intelligence: The Next AI Superpower — and Its Moral Hazard
1. The New Privacy Crisis
Public concerns over privacy and personal information have been discussed extensively, but no real solutions are in sight. People continue giving away personal information in exchange for convenience, and collecting and selling that information is a multi-billion-dollar business. Big tech companies have profited not only by selling access to advertisers but also by exploiting human psychology: behavioral loops keep users on platforms longer, and, as whistleblowers have revealed, some algorithms were deliberately designed to push divisive, enraging content to prolong engagement and maximize advertising revenue.
Beyond advertising revenue, social media posts, blogs, and other digital content have been harvested as training data for AI models, ripping off creators while a handful of “AI-cartel” companies and individuals amass unprecedented wealth and power.
Despite making huge profits off personal information, many companies have mishandled user data. Numerous large-scale hacks have made more personal information available to scammers and criminals. Robocalls have become a common nuisance, and identity theft is rampant.
And now a new kind of privacy crisis is emerging. AI can crawl personal information from public sources, making aggregation easier than ever. New startups are building this aggregated data into people intelligence: the ability to map decision-making chains inside companies and profile individuals for sales and marketing. The risk that such information will be abused for manipulation and scams is no longer hypothetical.
Will this new “people intelligence,” built by AI, become the next superpower for corporations, or fall into the hands of scammers? Will it be used for bullying or blackmail? Understanding human behavior is a powerful tool, and a dangerous one. Nobody wants to be treated as a data point. But hasn’t that already happened?
2. From Bridgewater to the Corporate World: When People Became Data
Long before AI automated insight, Ray Dalio’s Bridgewater Associates experimented with “Baseball Cards” — a structured way to capture employee attributes and decision-making tendencies. The idea was to make subjective impressions visible and consistent. Some saw it as a breakthrough in organizational transparency; others viewed it as an uncomfortable reduction of people to data.
But the deeper lesson is this: organizations are driven to find the “best people” to maximize corporate performance, and to do that they need to measure it, from KPIs to “baseball card” stats. Haven’t we already done this in sports, from best-selling books like Moneyball to fantasy leagues and betting apps built on human performance statistics?
Bridgewater’s internal system required enormous manual effort. The latest AI makes that trivial. What originated as a reflective management tool has become the blueprint for a capability that can now be deployed on anyone, anywhere, without their knowledge.
3. Why CRMs Fail — and What People Intelligence Replaces
Today’s corporate infrastructure for understanding people is both omnipresent and profoundly shallow. Customer Relationship Management (CRM) platforms dominate sales organizations. They track deals, log calls, and forecast revenue. But they understand very little about humans. CRMs assume organizations operate like pipelines; actual organizations function more like ecosystems.
The real dynamics — alliances, skeptics, blockers, quiet influencers — live outside any CRM field. These systems can tell you what happened, but never why, never how, and never who truly matters. They record motion but not meaning, contact but not context.
Even Salesforce’s “Einstein” AI still focuses on product-layer automation rather than the deeper insights that people intelligence provides.
People Intelligence emerges precisely in the vacuum CRMs leave behind. It reads the public web not as content but as signal: patterns of leadership, linguistic style, personality markers, reputation arcs, political alliances, intellectual influences, career trajectories, and the invisible threads connecting people across institutions.
A comparison illustrates the shift:
| Dimension | CRM Systems | People Intelligence |
|---|---|---|
| Object of analysis | Deals & activity logs | Humans & influence networks |
| Data source | Manually entered corporate data | Public web, social graphs, inferred context |
| Core function | Tracking & forecasting | Interpretation, prediction, behavioral mapping |
| Blind spot | Human complexity | Ethical boundaries |
CRMs tell you who you talked to.
People Intelligence tells you who you should talk to, who matters most — and how to talk to them.
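The “who matters most” claim can be made concrete with a toy influence-network sketch. Everything here is hypothetical: the names, the edges, and the scoring are invented for illustration, and a real system would infer the edges from public signals rather than hard-code them.

```python
from collections import deque

# Hypothetical influence edges: "A -> B" means A's opinion sways B.
# All names and relationships are made up for illustration.
influences = {
    "cto": ["vp_eng", "staff_engineer"],
    "staff_engineer": ["vp_eng"],   # a quiet influencer, invisible to the CRM
    "vp_eng": ["procurement"],
    "procurement": [],
    "sales_contact": [],            # the person the CRM logs calls with
}

def reach(graph, start):
    """Count how many people a node sways, directly or transitively (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) - 1  # exclude the start node itself

scores = {person: reach(influences, person) for person in influences}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # the logged 'sales_contact' ranks last; the 'cto' first
```

Even this crude transitive-reach score surfaces the gap the table describes: the CRM’s point of contact carries no influence at all, while the actual decision chain runs through people no CRM field records.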
4. When OSINT Becomes a Corporate Skill
People Intelligence is not an invention of the tech industry. It is an inheritance from intelligence agencies, investigative journalism, and national security. OSINT — open-source intelligence — has long been used by investigators through tools like Maltego to map networks, Pipl to resolve identity fragments, and link-analysis software to uncover relationships.
What has changed is not the existence of OSINT, but its accessibility. What once required trained analysts now requires nothing more than a browser and an LLM. Public data becomes insight; insight becomes advantage. AI models trained to detect weak signals can read emotional tone, leadership style, political alignment, professional anxieties, and relational dynamics from patterns humans overlook.
Then come the deeper layers of behavioral science. CIA interrogation frameworks analyze baseline shifts and stress cues. Chase Hughes’ behavioral-influence models identify markers of authority, insecurity, compliance, and resistance. None of these systems were built for corporate use, yet their logic is easily repurposed when language models can detect micro-patterns across thousands of public statements.
Companies do not need espionage. They only need the breadcrumbs people leave willingly.
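To illustrate how little machinery those breadcrumbs require, here is a deliberately crude sketch that tallies behavioral markers across public statements. The marker lexicons and the posts are invented; a real system would use an LLM rather than keyword counts, which is precisely why the barrier to entry has collapsed.

```python
# Toy marker lexicons (invented for illustration; real systems would
# use an LLM to judge tone, not keyword lists).
MARKERS = {
    "authority": {"must", "require", "decide", "own"},
    "insecurity": {"maybe", "hopefully", "worried", "unsure"},
    "compliance": {"agree", "aligned", "happy to", "of course"},
}

def profile(statements):
    """Tally marker hits per category across a person's public statements."""
    counts = {category: 0 for category in MARKERS}
    for text in statements:
        lowered = text.lower()
        for category, lexicon in MARKERS.items():
            counts[category] += sum(lowered.count(term) for term in lexicon)
    return counts

public_posts = [  # hypothetical public statements
    "We must decide this quarter; I own the roadmap.",
    "Maybe we ship in June, hopefully earlier.",
]
print(profile(public_posts))
```

Two sentences of willingly published text already yield a (crude, possibly wrong) behavioral sketch, and nothing in the pipeline required consent, access, or espionage.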
5. The Ethical Boundary No One Has Defined
The ethical risk of People Intelligence is not merely privacy — it is asymmetry. Individuals rarely know they are being modeled, much less how deeply. They cannot audit what has been inferred, nor correct what is inaccurate. Organizations wield understanding without accountability; subjects bear consequences without visibility.
And the stakes extend beyond sales. People Intelligence can shape hiring decisions, negotiation strategies, crisis management, partnership targeting, investor relations, and even governmental affairs. When an AI model creates a psychological portrait of a person — accurate or not — it becomes a silent participant in decisions that shape careers, reputations, and opportunity.
Here, the questions multiply faster than the answers:
- Is it ethical to model someone’s psychology without consent?
- Is a person entitled to know the inferences drawn about them?
- Does a company have the right to act on behavioral predictions a person never disclosed?
- Who is accountable when an AI-generated profile becomes self-fulfilling?
- What happens when those with power understand people more deeply than people understand themselves?
These questions are not abstractions. They are the structural challenges of a world where inference becomes the currency of influence.
6. The Strategic Temptation — and the Civilizational Risk
Companies will adopt People Intelligence because it works. It shortens sales cycles, identifies invisible stakeholders, predicts objections, and reveals paths of least resistance. In competitive markets, every insight becomes an advantage. But the more organizations rely on AI-mediated human understanding, the more they risk something harder to quantify: legitimacy.
The same tools that help a salesperson navigate a decision chain could help a corporation analyze political figures, media personalities, activists, employees, or communities. The line between strategy and manipulation shrinks as the resolution of insight sharpens.
People Intelligence democratizes power in one direction — giving small teams capabilities once reserved for intelligence agencies — while centralizing it in another. Those who adopt it gain leverage. Those who don’t become legible to others while remaining blind themselves.
This asymmetry is the real danger.
7. The Reboot Principle: Power Without Ethics Becomes Exploitation
The rise of People Intelligence signals a new era in organizational capability. Yet the fundamental question is not technological but moral. A society that allows unrestricted behavioral inference risks normalizing a world where people become transparent while institutions grow opaque. Orwell imagined surveillance as a centralized tyranny; what he could not have predicted is a world where the “telescreen” lives in every pocket — and where people willingly feed it with posts, clicks, searches, and traces of their private lives. Today, organizational AI can profile anyone from public signals alone. The challenge is not to prevent organizations from understanding people — humans have always tried to do that — but to ensure this understanding does not eclipse dignity or autonomy.
People Intelligence, like any powerful tool, must be governed before it governs us. The future will belong to institutions that refuse to treat understanding as entitlement, and choose instead to treat it as responsibility.