Technology Wizards

Behind every great invention, there's a great mind working towards progress. From mobile technology to medical inventions, these technology wizards have revolutionised the way we live.

Paul Lauterbur and Peter Mansfield's invention - magnetic resonance imaging - has transformed almost every area of surgery, enabling doctors to see inside a patient's body without cutting it open first.

"MRI has totally changed neurosurgery," says Nirit Weiss, assistant professor of neurosurgery at Mouth Sinai School of Medicine in New York. "If you open the skull and look at the brain, it looks like a blob - you can't just look at it and see the different cell groups. But MRI has allowed us to visualise the brain's structures so we have a map in our head of where to go and where to avoid."

Some revolutionaries were neglected by the scientific establishment of their time. Rosalind Franklin, for instance, was excluded from sharing in the Nobel Prize for the discovery of the structure of DNA, despite her great contribution to the understanding of its molecular structure. In fact, she gave her life to the cause, exposing herself to massive amounts of radiation in pursuit of the best possible X-ray photograph of a strand of DNA, and died of cancer at the early age of 37. Her X-ray images of the double helix provided the crucial evidence James Watson and Francis Crick needed to complete their model, and even so, neither scientist acknowledged her work when they received the Nobel Prize in 1962.

Another tech inventor who has changed the world is Tim Berners-Lee, credited with inventing the World Wide Web in 1989. By designing and building the first web browser, editor and server, he changed the way information is created and consumed.

Bill Gates, too, has revolutionised the world we live in today. He had an early interest in software and began programming computers at the age of thirteen. He later founded Microsoft, which became famous for its computer operating systems and killer business deals.

"I choose a lazy person to do a hard job because a lazy person will find an easy way to do it," Bill Gates said, in reference to the popular belief that inventors are lazy people who find a way to make their lives easier. "He also once said, "I failed in some subjects in exam, but my friend passed in all. Now he is an engineer in Microsoft and I am the owner of Microsoft." He has also been quoted saying: "Be nice to nerds. Chances are you'll end up working for one."

Still in the field of computers, Jack Kilby and Robert Noyce independently invented the integrated circuit - the microchip - in 1959. This invention broke through the greatest obstacle to faster and more powerful computers, and sparked a revolution in technological miniaturisation. Although it was Kilby who was awarded the Nobel Prize, it was Noyce's silicon-based chips that caught on, and Noyce went on to co-found Intel in 1968, today the largest manufacturer of semiconductors. Kilby also went on to co-invent the handheld calculator.

Filmmaker George Lucas revolutionised special effects in the movies by pioneering motion control camera techniques and spearheading the computer-generated imagery revolution in the 1980s. This revolution had its roots in Lucas's ILM (Industrial Light and Magic), which he founded in 1975 to bring his vision of Star Wars to life.

"A special effect is a tool, a means of telling a story," Lucas said. "A special effect without a story is a pretty boring thing."

He has also revealed that "the secret to film is that it's an illusion."

Many have wondered where he got the inspiration to revolutionise the film industry. He has stated: "As a kid, I read a lot of science fiction. But instead of reading technical, hard-science writers like Isaac Asimov, I was interested in Harry Harrison and a fantastic, surreal approach to the genre. I grew up on it. Star Wars is a sort of compilation of this stuff but it's never been put in one story before, never put down on film. There is a lot taken from Westerns, mythology, and samurai movies. It's all the things that are great put together. It's not like one kind of ice cream but rather a very big sundae."

Simulation technology to predict refugee crisis

A new computer simulation of the journeys refugees make when they flee major conflicts can correctly predict more than 75% of their destinations, and may become a vital tool for governments and NGOs, helping them allocate humanitarian resources more effectively and at strategic points.

Researchers from the Department of Computer Science at Brunel University London - Diana Suleimenova, Dr David Bell and Dr Derek Groen - used publicly available refugee, conflict and geospatial data to construct simulations of refugee movements and their potential destinations for African countries.

The data-driven simulation tool was able to predict at least 75 percent of refugee destinations correctly after the first 12 days for three different recent African conflicts. It also proved more accurate than established forecasting techniques ('naïve predictions') at forecasting where, when and how many refugees are likely to arrive, and which camps are likely to become full and need additional resources and assistance. These results were published in Scientific Reports.
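
To make the 75 percent figure concrete, below is a minimal sketch of one way such an accuracy score could be computed, comparing simulated camp populations against observed UNHCR registrations on a given day. The camp names, population figures and error metric are illustrative assumptions, not the paper's actual validation data or procedure.

```python
# Hypothetical camp populations on day 12 of a conflict (illustrative only).
observed  = {"CampA": 40_000, "CampB": 25_000, "CampC": 10_000}
simulated = {"CampA": 37_000, "CampB": 29_000, "CampC": 9_000}

total = sum(observed.values())
# Total absolute placement error, normalised by the observed population.
error = sum(abs(simulated[c] - observed[c]) for c in observed) / total
print(f"{(1 - error) * 100:.0f}% of refugee placements matched")  # ~89%
```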

The research team created their simulations for the Burundi crisis in 2015, which took place after Pierre Nkurunziza attempted to become president for a third term; the Central African Republic (CAR) crisis in 2013, triggered when the Muslim Seleka group overthrew the central government; and the Mali civil war in 2012, caused by insurgent groups campaigning for independence of the Azawad region.

The team relied on open data resources to both enable these simulations and validate their accuracy. These sources included refugee registration data from the United Nations High Commissioner for Refugees (UNHCR), conflict data from the Armed Conflict Location and Event Data Project and geographic information from Microsoft Bing Maps.

While the simulations do not accurately predict every refugee movement, the approach reproduced the key refugee destinations in each of the three conflicts, and it can therefore be re-applied to simulate other conflict situations reported on by the UNHCR.

For instance, in Burundi, the simulation correctly predicted the largest inflows into Nyarugusu, Mahama and Nakivale throughout the conflict's early stages. In the CAR crisis, the simulation correctly reproduced the growth pattern in Cameroon's East camp, as well as the stagnation of the refugee influx into Chad's camps. In Mali, the simulation accurately predicted trends in the data for both Mbera and Abala, which together account for around three-quarters of the refugee population.

The researchers used a new agent-based modelling program named Flee, which was made public with the publication of their paper. Although agent-based modelling has been widely used to study population movements, and has become a prominent method for explaining migration patterns, this is the first time it has been used to predict the destinations of refugees fleeing conflicts on the African continent.
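
As a rough illustration of how an agent-based movement model works, the sketch below walks a population of agents across a small location graph for 12 simulated days and counts where they end up. The graph, probabilities and movement rule are hypothetical stand-ins invented for this example; they are not the researchers' actual model or data.

```python
import random
from collections import Counter

# Hypothetical location graph: place -> reachable neighbouring places.
ROUTES = {
    "ConflictZone": ["TownA", "TownB"],
    "TownA": ["CampNorth", "TownB"],
    "TownB": ["CampSouth", "TownA"],
    "CampNorth": [],
    "CampSouth": [],
}
CAMPS = {"CampNorth", "CampSouth"}  # camps are final destinations

def step(location):
    """Move one agent a single hop, preferring a camp when one is reachable."""
    if location in CAMPS:
        return location                      # agents stay once in a camp
    neighbours = ROUTES[location]
    camps_nearby = [n for n in neighbours if n in CAMPS]
    if camps_nearby and random.random() < 0.8:
        return random.choice(camps_nearby)   # usually head for safety
    return random.choice(neighbours)         # otherwise keep moving

agents = ["ConflictZone"] * 1000             # everyone flees the same conflict
for day in range(12):                        # simulate 12 days of movement
    agents = [step(a) for a in agents]

print(Counter(agents))                       # predicted spread across camps
```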

Suleimenova, Bell and Groen explain in Scientific Reports that their simulation is not directly tailored to these conflicts, but is rather a 'generalised simulation development approach' which can forecast the distribution of refugee arrivals across camps, given a particular conflict scenario and a total number of expected refugees.

This simulation development approach allows organisations to develop simulations quickly when a conflict occurs, and enables them to investigate the effect of border closures between countries and of the forced redirection of refugees across camps. It also helps define procedures for collecting data and validating simulation results, aspects which are usually not covered when a simulation model is presented on its own.

According to the authors, "Accurate predictions can help save refugees' lives, as they help governments and NGOs to correctly allocate humanitarian resources to refugee camps, before the (often malnourished or injured) refugees themselves have arrived. To our knowledge, we are the first to attempt such predictions across multiple major conflicts using a single simulation approach."

The authors also urge greater investment in the collection of data during conflicts, explaining why this is important and why the data is hard to obtain. "Empirical data collection during these conflicts is very challenging, in part due to the nature of the environment and in part due to the severe and structural funding shortages of UNHCR emergency response missions. Both CAR and Burundi are among the most underfunded UNHCR refugee response operations, with funding shortages of respectively 76 and 62%".

With a record 22.5 million refugees worldwide, "more funding for these operations is bound to save human lives, and will have the side benefit of providing more empirical data – enabling the validation of more detailed prediction models."

The research group aims to collaborate with humanitarian organisations, adapting their technology to help specific humanitarian efforts, and to further reduce development time by automating the creation of these simulations.

'A generalized simulation development approach for predicting refugee movements' by Diana Suleimenova, David Bell and Derek Groen (Department of Computer Science, Brunel University London) is published in Scientific Reports.

Google’s DeepMind: Advance in AI

Acquired by Google in 2014, DeepMind is a British artificial intelligence company founded in September 2010. The company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a Neural Turing Machine - a network able to access external memory like a conventional Turing machine - resulting in a computer that imitates the short-term memory of the human brain.

The company became famous in 2016 after its AlphaGo program beat a human professional Go player for the first time, and made headlines again after beating Lee Sedol, the world champion, in a five-game match.

Google's DeepMind has made another big advance in artificial intelligence by getting a machine to master the Chinese game of Go without help from human players. Although AlphaGo started by learning from thousands of games played by humans, the new AlphaGo Zero began with a blank Go board and no data bar the rules. After learning the rules, AlphaGo Zero played itself. Within 72 hours it was good enough to beat the original program by 100 games to zero.

DeepMind's chief executive, Demis Hassabis, said the system could now have more general applications in scientific research. "We're quite excited because we think this is now good enough to make some real progress on some real problems even though we're obviously a long way from full AI," he said.

The software defeated the leading South Korean Go player Lee Se-dol by four games to one last year, in a game that has more possible legal board positions than there are atoms in the universe. AlphaGo also defeated the world's number one Go player, China's Ke Jie.

Go is an abstract strategy board game for two players, in which the goal is to surround more territory than the opponent. The game was invented in China over 2,500 years ago and is thus believed to be the oldest board game still played today. The rules are simpler than those of chess, but a player typically has a choice of around 200 moves at each turn, compared with about 20 in chess. Top human players usually rely on instinct to win.
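
Those branching factors hint at why Go resisted brute-force computation for so long. The back-of-envelope calculation below uses the article's figures of roughly 200 choices per turn in Go and 20 in chess; the assumed game lengths are illustrative, not taken from the article.

```python
import math

# Rough game-tree sizes: (choices per turn) ** (moves per game).
# Game lengths of ~150 moves for Go and ~80 half-moves for chess are assumptions.
go_log10 = 150 * math.log10(200)     # log10 of 200**150
chess_log10 = 80 * math.log10(20)    # log10 of 20**80

print(f"Go:    ~10^{go_log10:.0f} possible games")     # ~10^345
print(f"Chess: ~10^{chess_log10:.0f} possible games")  # ~10^104
```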

The achievements of AlphaGo required the combination of vast amounts of data - records of thousands of games - and vast computer-processing power.

David Silver, lead researcher on AlphaGo, said the team took a different approach with AlphaGo Zero. "The new version starts from a neural network that knows nothing at all about the game of Go," he explained. "The only knowledge it has is the rules of the game. Apart from that, it figures everything out just by playing games against itself."

While AlphaGo took months to get to the point where it could take on a professional, AlphaGo Zero got there in just three days, using only a fraction of the processing power.

"It shows it's the novel algorithms that count, not the computer power or the data," says Mr Silver.

He highlighted an idea that some may find scary: in just a few days a machine has surpassed the knowledge of this game acquired by humanity over thousands of years.

"We've actually removed the constraints of human knowledge and it's able, therefore, to create knowledge itself from first principles, from a blank slate," he said.

While AlphaGo learned from and improved upon human strategies, AlphaGo Zero devised techniques which the professional player who advised DeepMind admitted he had never seen before. It is able to do this by using a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.
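
The loop described above can be illustrated at toy scale by swapping the neural network for a value table and Go for a far simpler game. The sketch below learns Nim (take one, two or three stones; whoever takes the last stone wins) purely by playing against itself, with no human examples. It is an analogy for the tabula rasa self-play idea, not DeepMind's actual algorithm.

```python
import random

STONES = 12      # starting pile size
EPSILON = 0.1    # how often the learner tries a random, exploratory move
ALPHA = 0.1      # learning rate
# V[n]: estimated chance that the player to move with n stones left wins.
V = {n: 0.5 for n in range(STONES + 1)}
V[0] = 0.0       # no stones left: the previous player took the last stone

def choose(n):
    moves = [m for m in (1, 2, 3) if m <= n]
    if random.random() < EPSILON:
        return random.choice(moves)            # explore
    return min(moves, key=lambda m: V[n - m])  # leave opponent worst off

for _ in range(100_000):                       # self-play games
    n = STONES
    while n > 0:
        n_next = n - choose(n)
        # Temporal-difference update: my winning chance is the complement
        # of the opponent's chance in the position I leave behind.
        V[n] += ALPHA * ((1.0 - V[n_next]) - V[n])
        n = n_next

for n in range(1, STONES + 1):
    # Values end up near 0 when n is a multiple of 4 (a lost position)
    # and near 1 otherwise - knowledge discovered purely through self-play.
    print(n, round(V[n], 2))
```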

This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn tabula rasa from the strongest player in the world: AlphaGo itself.

Many of the research team have now moved on to new projects where they want to apply the same software to new areas. Demis Hassabis stated that some areas of interest include drug design and the discovery of new materials.

Some might see AI as a threat, but Hassabis looks to the future with optimism. "I hope these kind of algorithms will be routinely working with us as scientific experts, medical experts, on advancing the frontiers of science and medicine - that's what I hope," he says.

Nonetheless, he and his colleagues are aware of the dangers of applying AI techniques to the real world at a fast pace. A game with clear rules and no element of luck is one thing, but the random real world is another.

China pioneers next revolution in mobile tech

The fifth generation of mobile connectivity is just around the corner. Just as happened with 4G, 5G will eventually become the leading mobile technology. The only difference is that this time it's not the U.S. or Japan but China that is pioneering the cutting edge of mobile technology.

According to a report published by CCS Insight, 1 billion people will be using 5G connections by 2023. The mobile industry analyst forecasts that China will account for half of all 5G users by 2022. The report predicts that China will maintain a sizeable lead until 2025, accounting for 40 percent of global 5G connections that year. This adoption is expected to take place faster than 4G's, but several factors might hinder its progress.

"China will dominate 5G thanks to its political ambition to lead technology development, the inexorable rise of local manufacturer Huawei, and the breakneck speed at which consumers have upgraded to 4G connections in the recent past," Marina Koytcheva, VP Forecasting at CCS Insight, told CNBC.

According to the report, China will take the lead in 5G users, while Japan, the U.S. or South Korea will be the first to launch a commercial 5G network. Meanwhile, Europe is expected to trail behind by at least a year.

Although 1 billion people are expected to use 5G by 2023, the report doesn't foresee the new mobile generation having a dramatic presence in the Internet of Things (IoT). There are no clear expectations of how it will affect autonomous cars, and CCS states that such "mission critical" services will "have to wait even longer to come to the fore."

CCS cautions that there are still uncertainties about how and where network operators will deploy vast numbers of new base stations, the lack of a clear business case for operators, and consumers' willingness to upgrade their smartphones. It all depends on users buying new devices that take advantage of 5G; otherwise there's no point in continuing to invest in it. Meanwhile, Europe is expected to face its own challenges, stemming from market fragmentation, the availability of spectrum, and the influence of regulators.

According to the forecast, mobile broadband access on smartphones will be the principal area of 5G adoption, representing a colossal 99 percent of total 5G connections by 2025.

Kester Mann, Principal Analyst, Operators at CCS Insight, said: "The unrelenting hype that has surrounded 5G for several years has seen a diverse range of applications put forward as the main drivers of adoption. Some of them will be relevant at different times of the technology's development, but the never-ending need for speed and people's apparently limitless demand for video consumption will dominate 5G networks."

However, CCS Insight sees fixed wireless access as 5G's first commercial application. The report forecasts that the US will be an early adopter, boosted by leading advocates like AT&T and Verizon. Even so, the long-term opportunity will remain small, and the report expects it to represent only a small fraction of total connections.

Although the industry is apparently obsessed with everything being connected in the future, 5G will account for a relatively low number of connections in the Internet of Things (IoT) during the forecast period. 4G will fill the gap and will continue to satisfy demand until narrowband technology is fully supported within the 5G standard. Network operators have only just begun investing in LTE technologies such as NB-IoT and Cat-M to support devices that have life spans of several years. According to the report, significant numbers of 5G connections in this area are unlikely before the second half of the 2020s.

Other services, the so-called "mission critical" services, such as autonomous driving - regularly touted as a "killer" application in 5G - will have to wait even longer to come to the fore.

Geoff Blaber, VP Research, Americas at CCS Insight, comments: "5G is about creating a network that can scale up and adapt to radically new applications. For operators, network capacity is the near-term justification; the Internet of Things (IoT) and mission-critical services may not see exponential growth in the next few years but they remain a central part of the vision for 5G. Operators will have to carefully balance the period between investment and generating revenue from new services."