We don’t, as a general rule, use these pages to highlight what we are doing at Trackunit, but given the amount of headline-hogging “enjoyed” in the press by artificial intelligence recently, I’m going to make an exception and state our position.
But first, a word about those headlines. Here’s one from The Times on June 6, warning, ‘Two years to save the world, says AI adviser’. Another front-page headline, on May 31, was perhaps even more apocalyptic: ‘AI pioneers fear extinction’. That’s pretty stark, isn’t it? And the concerns expressed in both those pieces by serious experts in their field are valid.
But there’s also another side. And that other side is represented by the best aspects of OpenAI and its next-generation version of the AI technology that underpins its viral chatbot tool, ChatGPT. GPT-4 — and I say this as a not-easily-impressed software advocate and veteran — is truly astonishing.
We thought the earlier versions of ChatGPT were pretty mind-blowing, but this latest incarnation, unveiled in March 2023, goes way beyond manipulating text (itself vastly improved in quantity, quality and range). In one case it scored in the top 10% on a simulated law school bar exam; in another, it offered nutritional advice that any doctor would probably endorse.
The potential for construction is enormous and likely to enjoy exponential leaps forward with each new generation and with competition too from the likes of Google and Microsoft. But there are limits. And the legitimate concerns outlined by experts are real too. Our stance is that AI has risks, but we don’t want fear to block experimentation as that’s an immediate impediment to progress and, historically, has never worked anyway. It’s why we have taken the following steps in our AI policy.
First and foremost, we’ve given our temporary approval for the use of AI generative models like ChatGPT for text generation, image generation and the combined process of code review and generation.
That might not sound like a big deal given that many of us will probably have at least dabbled in it amid all the hype, but other companies, including the likes of Apple, Samsung, JP Morgan Chase, Accenture and Amazon, have prohibited or at least limited its use in the workplace.
“We’re at the cutting edge of technology and it’s our responsibility to embrace innovation, especially one that is going to change the game decisively.”
It’s not for me to comment on their decisions. But for Trackunit, our choice, across all departments, centers on becoming smarter when we write code and on improving quality and innovation.
There are good reasons for this. We’re at the cutting edge of technology and it’s our responsibility to embrace innovation, especially one that is going to change the game decisively. Whatever the technology is doing now, it’s fair to suggest each new leap will take us to new vistas. It’s exciting stuff and it places a premium on change management that will become ever more evident as the pace of change gathers speed.
But with excitement comes caution. We have to have checks. You’ll have noted I said ‘temporary’ earlier, and that is because while we’re embracing AI, we’re also taking it step by step to be sure. In essence, we will continue to evaluate and make sure the decision is right.
That means we have the human element as the bridge and check in all our ChatGPT interactions. It’s the human that is 100% responsible for generated text, content, images and all code. What that means is that we have not hooked up our infrastructure to any external AI engine. There’s a human in between. Always.
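A human-in-the-loop gate of this kind can be pictured as a simple approval step sitting between content generation and anything downstream. The sketch below is purely illustrative — the `Draft`, `approve` and `publish` names are my own invention, not Trackunit’s actual tooling:

```python
# Hypothetical sketch: AI-generated output is never used downstream
# until a named human reviewer has signed off on it.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Draft:
    content: str                       # text, code, or an image reference from the AI
    approved_by: Optional[str] = None  # name of the human who takes responsibility


def approve(draft: Draft, reviewer: str) -> Draft:
    """A human takes 100% responsibility by signing off on the draft."""
    draft.approved_by = reviewer
    return draft


def publish(draft: Draft) -> str:
    """Downstream use is blocked unless a human has approved the draft."""
    if draft.approved_by is None:
        raise PermissionError("AI output requires human approval before use")
    return draft.content


draft = Draft(content="Generated release notes ...")
# Calling publish(draft) here would raise PermissionError: no human in the loop yet.
approved = approve(draft, reviewer="jane.doe")
publish(approved)  # only now does the content flow onward
```

The point of the design is that the “always a human in between” rule lives in code, not in habit: the system itself refuses unreviewed output.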
This disconnect between the two systems is very important. If we don’t know where the data is going, we potentially lose control of how that data is being used, and we won’t compromise on that.
This is the crux of the issue here. AI mimics the way a human brain thinks and acts, and just as you have unethical humans, you can have unethical AI. It’s about how you train it, the questions you ask, and the way you ask the questions. And having that human divide between our data and any external AI engine is the way we keep control of the process and guide it towards choices that fit with our values and ethics. Used cynically and for gain, the kind of algorithms that propelled some members of US society toward believing that storming the US Congress in January 2021 was a good idea win the day. That’s not something we want.
With the principle of human involvement firmly established, we have approved using generative AI for research and experimentation. Without in any way diminishing the principle of control through a human presence, we have to be able to experiment in laboratory-like conditions in lots of little ways to discover what the future might hold for construction. Then we can potentially adapt it to industry needs to make it better, more sustainable and safer while developing the industry-wide push to eliminate downtime.
No great leap has ever happened without trying things out, and that is exactly the principle we are seeking to uphold here. It has already been reported that GPT-4 will impact the US workforce to varying degrees, and you can bet your bottom dollar that construction will also be affected by this ground-breaking technology.
“Change is a constant in business and we’re quite possibly about to enter one of the biggest transformations in business in recent history.”
This is not a bad thing. While it might mean some job profiles change, it may, through the automation of processes in ways we might not have imagined possible a few years ago, free up resources in a short time frame to work in better, more creative ways. Change is a constant in business, and we’re quite possibly about to enter one of the biggest transformations in recent business history. Management is the key, and those who manage it well will benefit the most.
There’s also low-hanging fruit to be had here. Lead generation, product improvements through the integration of stronger AI language models, and R&D augmentation could see research delivery times halved. And that’s just the tip of the iceberg.
If you pull back the curtain a little further, the potential for real disruption is there, particularly when it comes to core product offerings. It’s not hyperbole to suggest there could and will be products on the market across all sorts of industries that will be transformative. It’s why, while we wait for the regulators to catch up with the pace of developments, we have the checks in place to make sure that we at least are taking an ethical, moral stance.
It follows, then, that we have put in place a secondary verification process, not unlike two-factor authentication, with peer review firmly established to make sure we’re not transgressing the checks and balances that will underpin our AI policy going forward.
We’re responsible for a lot of sensitive and sometimes personal data and have therefore specifically prohibited ChatGPT from engaging in any fact-finding without secondary verification. This is very much aligned with rule #2, as we prohibit putting any sensitive personal or customer data into ChatGPT without human approval. And that rule is subject to review, refinement and improvement on an ongoing basis as we take each careful step into this new technology.
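One way to picture that rule is a screen that blocks any prompt containing sensitive markers unless a human has explicitly cleared it. The patterns and the `screen_prompt` function below are illustrative assumptions only — a real policy filter would be far more thorough:

```python
import re

# Illustrative sensitive-data patterns; a production filter would cover
# customer IDs, names, contract numbers and more.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-style number
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email address
]


def screen_prompt(prompt: str, human_approved: bool = False) -> str:
    """Refuse to forward a prompt with sensitive data unless a human signed off."""
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS) and not human_approved:
        raise ValueError("Sensitive data detected: human approval required")
    return prompt


screen_prompt("Summarise our Q2 product roadmap")  # clean prompt passes
# screen_prompt("Email jane@example.com the contract") would raise ValueError
# unless called with human_approved=True after explicit sign-off.
```

The same shape works for any “no sensitive data without approval” rule: detection is automated, but the override is always a deliberate human decision.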
I hope a lot of what I’ve said here comes across as common sense. While AI can be misused, we do owe it to ourselves to explore its possibilities as much as we can.
But don’t do this because it seems cool or edgy. Only with the right intent and with the right purpose, will this really have a profound impact to your business.
Just don’t leave your rigor at the door. You’ll need it.