Bond University marked the 30th anniversary of its founding in 2019. Five of the university’s leading academics predict how their areas of expertise will change in the three decades leading up to the 60th anniversary in 2049.
In 2049, will we use technology for the global good, to improve our lives, reduce poverty, spread education, eradicate disease, and reduce human impact on the environment?
Or will technology have mastered us, through a violent uprising of the machines, or hostile takeover and manipulation of our personal data?
Will complacency by policymakers, and lack of education and engagement by the broader community, have led to reactive and incremental adjustment of our legal systems, regulatory frameworks, and social norms to accommodate new and emerging technologies, resulting in missed opportunities to harness their full benefits?
Will the gaps between the haves and have-nots have been deepened by the technological divide?
For some of us, technology is exhilarating. For others, it is bewildering, disempowering, or even terrifying.
Many people are simply disengaged from technology, considering it to be out of their reach economically, technically, or temporally.
Yet all those viewpoints need to be represented in discussions about how, and even whether, technological advances should be implemented and regulated.
The ‘how’ and ‘whether’ conversations are hard. They require us to peer into not only our own futures, but the futures of others, to make ethical judgements about what our communities are likely to value.
It is far easier to focus instead on what affects us immediately and let the future take care of itself.
But if we don’t attempt to explore those ethical challenges and plan our responses to them, we risk not only thoughtlessly surrendering something, such as privacy, whose full value can only be appreciated once it has been irrevocably lost, but also failing to fully exploit the potential benefits of technology.
Sometimes we will get it wrong. But we stand a far better chance of getting it right if we engage in proactive debates about how to regulate technology and ensure its benefits and risks are equitably distributed, rather than relying on reactive measures to harness and rein in technology once it pushes the boundaries too far.
In the early 1800s, textile workers in the north of England resorted to violence to undermine the mechanisation of the textile industry.
The Luddites, as they became known, feared that technological advances would remove the need for their labour, and they were unable to see how they could fit into a new technology-based manufacturing industry. Similar concerns were expressed by employees in industries which were early adopters of computer and robotics technologies.
With the benefit of hindsight, we know that those fears were both true and untrue: disruptive technologies did spell the end of certain occupations. However, the implementation of those technologies resulted in the creation of new and different occupations.
Many of those newer occupations required higher levels of education and technological literacy. And yes, some people did get left behind.
The critical challenge, therefore, lies in ensuring people remain sufficiently engaged with technology and education to be able to adapt to emerging technologies.
Bond students, by virtue of their educational opportunities, and particularly their aptitude for leadership and innovation, are well positioned to contribute to and direct discussions around this technological step-up.
Equipped with the skills to critically assess both the opportunities and the challenges, graduates and the broader Bond community can make meaningful contributions to help create the world they want to live in.
Technology is neither inherently good nor evil. Determining which way the scales tip lies with us.