Explore how generative AI models reason, why their “thinking” is an illusion, and how leveraging LLM reasoning can build more intelligent, reliable AI solutions.
Modern generative models learn patterns from their training data and, based on that learning, predict and generate new text, images, and other content. Large Language Models (LLMs) are trained on vast datasets to predict the next word (token) in a sequence. As these models are scaled up, in both parameter count and training data, they begin to generate well-formed, polished outputs: the resulting text is coherent, contextually consistent, and flows naturally because its pieces are statistically associated. Earlier language models, the predecessors of today’s LLMs, were smaller and could not generate text with the same coherence and contextual awareness. As models have grown larger, a new class of reasoning-oriented models has also emerged, exhibiting stronger reasoning capabilities.

However, the way an LLM “reasons” while generating text is not quite like human reasoning. Humans reason by understanding a problem; an LLM reasons by association, drawing on patterns seen in its training data. Given a math problem, an LLM may solve it because it has seen similar problems and can match patterns, rather than truly grasping the underlying mathematical concepts. This can lead to errors: if a problem is posed in a slightly different way, outside the patterns the model learned, the association-based reasoning can break down. Research has shown that a model’s chain-of-thought can be reliable on familiar patterns but becomes fragile under even moderate changes, sometimes producing fluent yet logically inconsistent steps for out-of-distribution problems.

One major factor in eliciting better reasoning from generative models is prompt design. If a prompt is effective and lays out steps, an LLM tends to follow that structure and can generate solutions to complex problems. This technique is known as Chain-of-Thought prompting: the model is guided to produce a step-by-step solution path instead of jumping straight to the final answer. By breaking problems into intermediate steps, accuracy on reasoning tasks often improves.
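To make that concrete, here is a minimal sketch of the difference between a direct prompt and a Chain-of-Thought prompt. There is no model call in it; the question and the prompt wording are purely illustrative, and each prompt string would be sent to whatever LLM client you actually use.

```python
# Minimal sketch of Chain-of-Thought prompting. No model is called here;
# `direct_prompt` and `cot_prompt` would each be sent to whatever LLM client
# you actually use. The question and wording are purely illustrative.

question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Direct prompt: the model is pushed to jump straight to an answer.
direct_prompt = f"{question}\nAnswer:"

# Chain-of-Thought prompt: the model is asked to lay out intermediate steps
# (12 pens = 4 groups of 3, 4 x $2 = $8) before committing to a final answer.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step. Show each intermediate calculation, "
    "then give the final answer on its own line starting with 'Answer:'."
)

print(direct_prompt)
print("---")
print(cot_prompt)
```

The only difference lies in the prompt itself: the Chain-of-Thought version explicitly asks for intermediate steps, which tends to anchor the model’s output on a worked path rather than a guessed answer.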
A range of studies has pointed out that while today’s generative models can perform reasoning-like tasks, they often do so in a superficial way. They lack certain capabilities that robust reasoning requires. For instance, large language models have no explicit internal memory in which to reliably store intermediate results; they have only a limited context window. This means they can lose track of information in long, multi-step problems. They are also prone to hallucinations, which in reasoning tasks can appear as confidently stated but incorrect intermediate steps. Importantly, these models struggle with formal logical deduction and mathematical proofs, domains where each step must be exactly correct, not merely probable.
Now let us take one step further and look at the capability of “thinking,” and at what exactly thinking means in generative-AI terms. When we talk about “thinking,” we enter a more conceptual and sometimes philosophical territory. “Thinking” in the context of AI is not a rigorously defined technical term; rather, it’s a colloquial way of asking whether the AI has anything like a mind, understanding, or conscious cognitive process. In everyday language, we might say a chess program is “thinking” about its next move, but we know it is following an algorithm. With generative AI, especially LLMs that can hold conversations and solve problems, it can feel even more like interacting with a thinking entity. Researchers often put the word “thinking” in quotes when referring to AI to signal that it is an analogy to human thought, not a literal equivalence.
So, what might “generative AI thinking” entail? One interpretation is the model’s internal chain-of-thought. As described above, we can prompt a model to produce its reasoning steps explicitly (e.g., “let’s think this through step by step”). Some advanced systems generate intermediate steps internally, essentially simulating a process before giving an answer. These models extend standard LLMs by incorporating a “reasoning process,” for example, using a chain-of-thought with reflection, to tackle a question. From one perspective, this is a form of thinking; the AI is deliberating internally instead of responding immediately. For instance, when asked a tricky riddle, a reasoning-enabled model might first enumerate possible interpretations, eliminate those that don’t fit, and only then produce the answer. This looks similar to how a human might mentally work through a problem.
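One way to picture such a reasoning process is a draft-critique-revise loop around an ordinary completion call. The sketch below is only an illustration of that idea under stated assumptions: `generate` is a hypothetical callable that sends a prompt to a model and returns text, and real reasoning-oriented systems implement a richer version of this internally.

```python
# Sketch of a "draft, critique, revise" reasoning loop. `generate` is a
# hypothetical callable that sends a prompt to an LLM and returns its text;
# reasoning-oriented models perform a richer version of this internally.

from typing import Callable

def reason_with_reflection(question: str,
                           generate: Callable[[str], str],
                           max_revisions: int = 2) -> str:
    # 1. Draft an explicit chain of thought.
    draft = generate(
        f"{question}\nList the possible interpretations and reason step by step."
    )
    # 2. Ask the model to critique its own reasoning, revising if needed.
    for _ in range(max_revisions):
        critique = generate(
            "Review the reasoning below for logical gaps or contradictions. "
            "Reply 'OK' if it holds, otherwise describe the problem.\n\n"
            f"Question: {question}\nReasoning: {draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break
        draft = generate(
            f"Question: {question}\nPrevious reasoning: {draft}\n"
            f"Critique: {critique}\nRewrite the reasoning and fix the issues."
        )
    # 3. Only then commit to a final answer.
    return generate(
        f"Question: {question}\nReasoning: {draft}\nGive only the final answer."
    )
```

The model is asked to check its own draft before answering, which mirrors the “enumerate interpretations, eliminate the ones that don’t fit, then answer” behaviour described above.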
A key aspect of human thinking is understanding and having internal mental models of concepts and situations. Do generative AI models understand what they talk about? An LLM does not know the meaning of “love” or comprehend what physical pain is; it only knows how people have written about “love” or “pain” in text. A large language model relies on how similar things have been written, and on learned signals about which outputs are likely to be rated “right” or “wrong,” and produces its answer accordingly. Because of this lack of genuine understanding or intent, AI “thinking” can wander, sometimes down the wrong path. A striking phenomenon is that LLMs sometimes produce confident but wrong answers, known as hallucinations. They might give a detailed explanation that sounds logical yet is completely fabricated. From the outside, this looks like a thought process that led to a conclusion, but it is actually the model following statistical cues to make its answer sound plausible. Generative AI “thinking” is, therefore, a metaphorical way to describe the internal processing of these models. We can encourage models to simulate a process, and indeed, they can produce remarkable facsimiles of reasoning.
So now, as we understand what LLM reasoning and “thinking” are, the next question is: how can we leverage the current state of LLMs, with their reasoning capabilities, to solve complex problems? To build better solutions or automate processes with AI, we must first take a step back. The starting point is not the AI itself, but a deep understanding of the process we are trying to solve or automate. We need to know exactly how that process works, what should happen at each stage, where there is room for error, and where errors cannot be tolerated. Only with this foundation can we begin designing a system powered by AI. Simply having a high-level overview and assuming “AI will take care of most parts,” without true comprehension, is not enough. AI, whether a standard LLM or a reasoning-based one, is only as good as our own clarity and understanding of the problem.

The first step is to truly understand the domain: either develop deep knowledge of the field yourself or collaborate with a domain expert to map out the complete workflow, building the solution framework from their perspective and capturing the real-world nuances. The second step is to prepare this workflow in a way that aligns with AI capabilities (an LLM or any other model). LLMs reason best when the intent is clear, the outcome is defined, and the steps to reach that outcome are unambiguous. By designing structured paths and layered workflows, we enable both humans and AI to collaborate effectively, creating systems that can automate solutions end to end. The final step is testing, rigorous and extensive testing: once the solution is built, it must be validated through multiple rounds of iteration, checking each component carefully to ensure reliability and robustness.
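As an illustration of what a structured, layered workflow can look like in code, here is a small sketch. Everything in it, the `Stage` type, the stage names, and the toy checks, is hypothetical; the point is only that each stage has a clear intent, a defined outcome, and an explicit validation gate, so failures surface where they occur instead of propagating silently.

```python
# Illustrative sketch of a layered workflow: each stage has a clear intent,
# a defined outcome, and an explicit validation gate. Stage names and checks
# here are hypothetical placeholders for real domain logic.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Stage:
    name: str
    run: Callable[[Dict], Dict]        # transforms the working context
    validate: Callable[[Dict], bool]   # hard gate before the next stage

def run_workflow(stages: List[Stage], context: Dict) -> Dict:
    for stage in stages:
        context = stage.run(context)
        if not stage.validate(context):
            # Stop (or escalate to a human) where errors cannot be tolerated.
            raise ValueError(f"Stage '{stage.name}' failed validation")
    return context

# Toy example: normalize a support ticket, classify it, then draft a reply.
workflow = [
    Stage("normalize", lambda c: {**c, "text": c["text"].strip().lower()},
          lambda c: bool(c["text"])),
    Stage("classify", lambda c: {**c, "label": "billing" if "invoice" in c["text"] else "other"},
          lambda c: c["label"] in {"billing", "other"}),
    Stage("draft", lambda c: {**c, "reply": f"Routing to {c['label']} team."},
          lambda c: "reply" in c),
]

result = run_workflow(workflow, {"text": "  Question about my INVOICE  "})
print(result["reply"])  # -> "Routing to billing team."
```

In a real system, individual stages might call an LLM, a rules engine, or a human reviewer; the explicit gates are what make the rigorous testing in the final step tractable, because each component can be validated in isolation.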
This approach is how we create AI solutions that are strong and durable. By combining the reasoning capabilities of LLMs with the depth of human thinking, we can design AI workflows that deliver intelligent, resilient, and genuinely valuable solutions.
