Large Language Models (LLMs) built on the Transformer architecture have become prevalent in the past few years. LLMs designed for general interaction are trained to sound human-like and conversational in their output. However, LLMs can also be trained on specialised domain knowledge. These LLMs may trade off conversationality and reasoning for the capacity to dig into data related to their domain.
This article looks at LLMs in trading, with a focus on using 'off-the-shelf' LLMs to assist the trader. This is partly because domain-specific LLMs are only beginning to appear at brokerages, and partly because general LLMs emulate human-like features such as conversation and reasoning, which in an open-ended task like trading might be more useful than narrowly domain-focused output.
A revolution has happened in human-computer interaction (HCI) in recent years, gathering pace in the past year or so. This breakthrough is built on years of research and development in Artificial Intelligence (AI). AI is not new, and many of today's algorithms date back to the '80s and '90s. These include neural nets and search algorithms. Neural nets can be roughly seen as a way to train a computer system on data and have it generalise on that data, so it can make inferences, rather than applying rules.
AI has two broad sides: one based on rule sets, namely rule-based reasoning, and the other based on training a system on data and then making inferences from that data to related tasks. Search plays a role in both areas, as it offers a way to fine-tune the results of neural net outputs and, on the other side, to power rule generation from rule sets.
Rule-based reasoning is effective but limited: it constrains information to the rule set, requires that the rule sets be built in the first place, and becomes unworkable for naturalistic language processing and reasoning, because the complexity of the rule sets explodes. Attempts to get around these limitations have included research in qualitative and fuzzy reasoning.
However, since 2000 there have been ongoing advances in computer hardware capacity and in the algorithms and methods used to train computer systems on data (namely machine learning). This has enabled neural nets to come into their own (augmented by search). So the capacity to train and reason, without having to create or be bound by rule sets, has increased over the past decade or so.
A key issue in this kind of process is how to tell the software what to focus on. So algorithms were developed that help a neural net focus on, or pay attention to, particular parts of the data, depending on the current input. A successful architecture built on this idea is the Transformer. It utilises the concept of attention, in which weights computed from the input itself determine how much each part of the data influences the output.
What this algorithmic advance has enabled is for neural nets to be trained on vast datasets, build a kind of internal representation of that data, and then, with the Transformer architecture at their core, make inferences, that is, reason over this data in a way which seems human-like. In effect, it has allowed neural nets to train on data in a way that helps make them seem intelligent. Thus were born what are termed Large Language Models (LLMs), which are typically based on the Transformer model or variants of it.
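To make the idea of attention a little more concrete, here is a minimal NumPy sketch of the scaled dot-product attention step at the heart of the Transformer. It is a simplified illustration (a single head, with no learned projection matrices), not a description of any particular model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal sketch of the attention step inside a Transformer.
    Q, K, V are (sequence_length, dimension) arrays of query, key and
    value vectors derived from the input tokens."""
    d = K.shape[-1]
    # Compare every query with every key to get raw relevance scores.
    scores = Q @ K.T / np.sqrt(d)
    # Softmax turns the scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of the value vectors,
    # i.e. the model 'pays attention' to the most relevant tokens.
    return weights @ V

# Toy example: 4 tokens, 8-dimensional vectors.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```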
Trading is at its core an intelligent process, but it is not always conducted in that manner. Still, something that can reason and is trained on everyday data, the kind of data that traders are exposed to, might be useful for confirming ideas, generating ideas, and other kinds of brainstorming. It is also possible to train LLMs on specific types of data. However, due to the limitations of the technology, there may be trade-offs between domain-specific training and the core capacity for human-like conversation and reasoning that LLMs can display. So what can be done with an off-the-shelf LLM, if you are a trader?
LLMs and their GPU powerhouse
To work out what something can do, we need to understand how it works. The important elements of an LLM are the dataset it is trained on, the hardware it runs on, and the complexity of its representation of its data. One way to appreciate this is to try to download an LLM from Hugging Face onto your computer. An LLM of acceptable complexity needs to run in the cloud in a data center. While technically you can run an LLM on a powerful desktop with GPUs (Graphics Processing Units), even there it will likely be slow and not very effective in its output, especially for the more useful, complex models.
One core reason NVIDIA is doing well is that it supplies the GPUs used in the data centers that power LLMs. Having a desktop with a GPU is far from having a data center with stacks of powerful GPUs. So the takeaway is that LLMs require vast hardware resources to run effectively. As the complexity of their representations increases, which is what is happening, the demands on data centers and GPUs increase as well, ameliorated by improvements in the efficiency of the underlying algorithms (e.g. trade-offs in speed vs reasoning).
Is trading intelligent?
Is trading an intelligent process? Well, the author of The Intelligent Investor might like to think that investing should be. Is trading investing? Trading can be seen as a kind of dynamic investing process, where instead of buying a possibly undervalued market and holding it while it grows, one might trade in and out of this process. Some markets in effect make you trade, as they do not tend to grow in this directional way, for example, alternatives like Forex. In these kinds of markets, the trader is following the ebb and flow of the valuation but guided by technical analysis and fundamental analysis, like the medium-term effect of interest rate differentials.
Trading is not necessarily that intelligent, as the volatility of day or swing trading markets can cut up any intelligent strategy. But traders arguably would like it to be. That is, to be able to analyse a market and have it conform to that analysis, with the caveat that market conditions change over time, requiring the analysis to be adjusted, but in a reasoned and derived manner. If the trader needs to exit early, then they do so for clear reasons.
But again, the volatility of the market means that what traders tend to do is use targets of some kind to make the trade a yes/no decision: if it does not meet the target within a certain time, exit, for example. Technical indicators can provide signals to make this process algorithmic, but not necessarily intelligent (a minimal example of such an indicator rule is sketched below), with the caveat that they derive from an intelligent search for repeating patterns in the market. Traders may then decide to let the algorithms trade on their behalf, or to copy other traders who have deeper levels of compiled market intelligence from experience. So can LLMs help make this process more intelligent?
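To show what such an indicator-driven yes/no rule looks like in practice, here is a minimal sketch of a moving-average crossover signal. The window lengths and the buy/sell/hold labels are illustrative assumptions, not a recommended strategy.

```python
def moving_average(prices, window):
    """Simple average of the last `window` closing prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=10, slow=30):
    """Return 'buy', 'sell' or 'hold' from a list of closing prices,
    using a fast/slow moving-average crossover rule."""
    if len(prices) < slow:
        return "hold"  # not enough data for the slower average
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    if fast_ma > slow_ma:
        return "buy"   # short-term average above long-term: upward momentum
    if fast_ma < slow_ma:
        return "sell"  # short-term average below long-term: downward momentum
    return "hold"
```

The rule is entirely mechanical: it fires whenever the averages cross, regardless of context, which is what makes it algorithmic rather than intelligent.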
Are LLMs intelligent?
The short answer would be no, they are not. They are very effective at Natural Language Processing (NLP) - a long-term goal of AI - and can produce output from input that is relevant and seems human-like. Ultimately the algorithms are predicting what comes next, but in a contextualised way.
This is why LLMs hallucinate, as they can make what is to them a perfectly good prediction, but which results in nonsense output. It's as if you responded to words by looking at a dynamic cheat sheet of what should come after the input from the other person, and sometimes mixed it up.
But the complexity of their representation of data, allied with improvements in their algorithms and hardware, means that the simulation of intelligence is becoming more and more effective, to the extent that everyday LLMs can seem like they pass the Turing Test (which is to say they pass it until they hallucinate or lose focus and stop seeming human-like).
Can LLMs help an intelligent being?
The short answer is yes, they can. It might be argued that they work best with a 'two to tango' approach, with the human and the LLM working together on some problem. So the question arises: can an LLM help with trading? As we have seen, trading is not necessarily that intelligent.
An LLM can certainly recommend strategies, and it can analyse data and make inferences. It can draw on training data covering a vast range of candlestick patterns. Even so, it may offer wrong information, so the human trader needs to check what the LLM is telling them. But it can also brainstorm outcomes. A trader might ask an LLM whether they should exit, and it may give a set of reasons to do so or not. However, asking specific investing or trading-related questions about a market may require an LLM trained in the specific domain, and this can mean that the LLM will be less human-like and less effective in its reasoning.
It is the reasoning ability which makes LLMs stand out, and if this is reduced by a focus on domain specifics, then an LLM can seem more like a queryable database than a reasoning partner. Ultimately it is up to the human to find ways to make an LLM gel with what they want to do. To a significant extent, this comes down to good prompts.
Prompts and trading
The prompt is the query asked of the LLM and the response is the reply it gives. It can be difficult to get this right, or at least to be efficient and effective. One approach is to use multiple prompts to get the LLM to focus on the problem. However, this itself is an art, as too many prompts can dissipate focus as well.
The prompter may find that the first prompt should be more general and the follow-up one or two prompts more specific, in effect narrowing down the query based on the response and the user's intent. Intent can develop as well, based on the response. Adjusting intent can help widen the net where necessary, rather than becoming too specific.
Bear in mind that LLMs have limited dynamic memory, but they will generally be able to follow a thread within a specific query theme, up to a point. The underlying algorithm is also non-deterministic, so unexpected outcomes can happen. LLMs sound conversational, but a conversation with an LLM is different from one with a human. The LLM is trying to help solve the problem expressed in the intent of the prompt, to the best of its ability, and needs your ongoing help (input) to do so. This is the core reason why LLMs work best as augmentation rather than replacement, especially for complex tasks.
Sample multi-turn prompts in trading
A turn is a query plus response, so multi-turn means multiple queries and responses in a thread, that is to say, a conversation. One might ask: "I am trading a market and want to know what to do next." Based on the response, which might be pretty general, the trader can narrow the prompt, for example: "If I do this, what might happen?", where 'this' refers to some path suggested by the LLM.
Based on its response, the trader might ask "When this happens what should I do?", changing the prompt intent and the thread can continue as the trader gets a sense of potential outcomes and strategies for their trading plan, but ultimately they will make their own trading decision.
The follow-up could be made more specific by asking "What strategy should I use?", which will likely draw upon the LLM's dataset of trading strategies. All output needs to be verified, as LLMs can make mistakes, but sound very convincing as they do so.
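To show how the multi-turn thread above could be organised in code, here is a minimal sketch using a chat-style message list. The `send_chat` function is a hypothetical placeholder for whichever LLM client or API you actually use; the role/content structure of the turns is the point, not the call itself.

```python
def send_chat(messages):
    # Hypothetical placeholder: swap in a call to your LLM provider's client here.
    return "(LLM response would appear here)"

# Turn 1: a general opening prompt.
messages = [{"role": "user",
             "content": "I am trading a market and want to know what to do next."}]
reply = send_chat(messages)

# Turn 2: narrow the intent based on the (general) response.
messages += [{"role": "assistant", "content": reply},
             {"role": "user", "content": "If I do this, what might happen?"}]
reply = send_chat(messages)

# Turn 3: the most specific follow-up, likely drawing on strategy-related training data.
messages += [{"role": "assistant", "content": reply},
             {"role": "user", "content": "What strategy should I use?"}]
final = send_chat(messages)  # verify this output before acting on it
print(final)
```

Keeping the earlier turns in the list is what lets the model follow the thread, within the limits of its context window.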
Future LLMs
An LLM is relatively static; it is not designed to plug into the market and trade for you. Robots are designed to do this. Robots, however, are simplistic and templated. They use rules to determine how to manage a trade. That is, they are rule-based, simplified and not adaptive. LLMs are not rule-based and are not simplified: they are designed for complexity and are adaptive. The problem is that markets don't have rules to train on, and an LLM cannot update its weights in real-time to take account of the immense complexity and volatility of a trading market.
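To make the robot side of this contrast concrete, here is a minimal sketch of the kind of fixed rule set a simple robot might apply to an open long trade. The stop-loss and take-profit percentages are illustrative assumptions; note that nothing in it adapts to changing market conditions.

```python
def manage_long_trade(entry_price, current_price,
                      stop_loss_pct=0.02, take_profit_pct=0.04):
    """Return 'exit' or 'hold' for an open long trade by applying two static rules."""
    change = (current_price - entry_price) / entry_price
    if change <= -stop_loss_pct:
        return "exit"   # rule 1: cut the loss at a fixed percentage
    if change >= take_profit_pct:
        return "exit"   # rule 2: take profit at a fixed percentage
    return "hold"       # otherwise do nothing: the rules never change

print(manage_long_trade(entry_price=100.0, current_price=97.5))  # 'exit' (stop-loss hit)
```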
If LLMs could do that, would they be any better than a human trader? Currently, LLMs are better than humans in some respects. They can provide almost instantaneous analysis which might take a person much longer. They can train on vast datasets that a human cannot. But they will have the same issues dealing with market information as humans, unless they can exploit their speed, like a combination of HFT and reasoning.
One suggestion, for now, is to use LLMs to help structure trading: a way to move beyond rule-based trading (which is mostly what trading is) towards fuzzier, human-plus-LLM structured and responsive trading. Does this suggest that a future LLM will do all this on its own? Unless it is AGI, the 'real' future AI, then possibly not, but it can still act as an assistant and support for the human being, who is intelligent, or trying to be, in the face of volatile, complex trading markets, like all of us.
Commentary and summary
LLMs have provided a way to have a human-like interaction with a computer system. LLMs operate by contextually predicting the next step to take. As such they are not intelligent and are limited by the predictive ability of the underlying algorithm. They are also limited by their training data, the complexity of the representation of this data, and hardware. LLMs are prone to error and their output needs to be verified.
Trading is a complex, multi-faceted activity undertaken by intelligent human beings. Its focus is volatile markets, which are prone to random behaviour, akin to white noise but sometimes more like pink noise. The existence of trends and ranges provides the tradable structure that traders look out for, as indicated by tools such as technical indicators. Traders may also try to find tradable structures on short-term time frames, from scalping to High-Frequency Trading (HFT).
LLMs offer a way to assist the trader in their tasks, offering potential outcomes for a given scenario, for example. Future LLMs may provide a way to plug into the market and using their advantage of speed, try and find tradable structures on very short-term time frames. But for the time frames of human trades, they can offer assistance. In the future they may be able to look for patterns themselves and trade on a trader's behalf, taking copy trading and robots to new levels.
For now, traders who are not self-directed may rely on other traders (copy trading) or on robots, which are simple rule-based systems built on trading patterns compiled from the market, typically via technical indicators. As non-rule-based systems, LLMs have current limitations for those looking for automation, but with effective use of prompts, they may even now offer ways for the trader to focus their attention on evolving trading strategies and decisions. This could help traders who plan and execute their own trades, as well as automated traders.
As LLMs are good at analysing data (due to their underlying machine learning algorithms), they could help traders take better paths in any situation where data is relevant and available. Which is to say, they might be a way to support and enable trading strategies in situations where one might turn to rule-based robots. In the future (but not yet), they may be able to make up strategies on the fly, dynamically adapted to changing market conditions. And those are the key speculative takeaways from this article.
Pepperstone Case Study: Automation
- Minimum deposit: $200
- Online trading platforms: MT4, MT5, cTrader, TradingView
Custom LLMs are starting to appear at brokerages. However, CFD providers can be highly regulated, and such LLMs have not appeared at the major names yet. That said, CFD providers are offering information about using LLMs in trading. Pepperstone is one such provider; it is aimed at experienced traders and regulated in different regions.
Pepperstone is potentially a good case study for automation in general, as it developed as a CFD provider focused on automated trading. CFD providers can be seen as dedicated to complex alternative markets, and indeed Pepperstone was initially what was termed a 'Forex broker'. However, it offers many more types of CFD markets to trade. All of Pepperstone's trading platforms can be used by automated traders, with Expert Advisor robots on MT4 and MT5, cBots on cTrader, and Pine Script on TradingView.
Pepperstone caters to automated traders with rapid order processing, because many use robots as they can trade at frequencies that humans cannot, or would find difficult to sustain. This CFD provider is built around automation and offers platforms and accounts for those who trade complex markets and strategies.