In this edition of our jargon-busting series we simplify five key technology buzzwords around coding, such as Big Data and Open Source. Read on and confidently describe how search engines use the code from your site to build their page rankings through a process known as scraping.
Short for 'Application Programming Interface', an API is a defined set of functions and protocols for accessing certain features and data of other applications and services. APIs are 'called' to request a specific set of data, usually because that data is shielded from direct access for commercial and security reasons.
For example, to display the details of a company registered in the United Kingdom on a website, you could call the Companies House API from your web server. The API would then search Companies House's database and return the relevant result as raw data, without any styling. Not only does this make development much easier, as developers do not need to know how the underlying database is structured, but it also lets them manipulate the received data in any way they want for display or calculation.
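As a minimal sketch of how such a call might look in Python: the endpoint URL, authentication header and field names below are illustrative assumptions, not the live Companies House API, so check its developer documentation for the real details.

```python
import json
import urllib.request

# Illustrative endpoint and key -- placeholders, not the real service.
BASE_URL = "https://api.example-companies-register.test/company/"
API_KEY = "your-api-key-here"  # hypothetical placeholder


def fetch_company(number: str) -> dict:
    """Call the (hypothetical) API and return the parsed JSON payload."""
    request = urllib.request.Request(
        BASE_URL + number,
        headers={"Authorization": API_KEY},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def summarise(payload: dict) -> str:
    """Pick out just the fields we want to show -- the raw response
    carries no styling, so we are free to format it however we like."""
    return f"{payload['company_name']} ({payload['company_status']})"


# A sample payload shaped like an assumed API response:
sample = {"company_name": "ACME LTD", "company_status": "active"}
print(summarise(sample))  # ACME LTD (active)
```

The point is the separation of concerns: the API hands back structured data, and the calling code decides how to present it.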
Simply put, big data is the processing of very large, continuous sets of unstructured data to make sense of all the information available. Such data could be gathered from social media (e.g. Twitter feeds, Facebook likes), web visits and so on. Because of the volume, variety and velocity of the data streams coming from all around the world, broad societal patterns can be identified. This insight lets product developers understand customer behaviour and demand, allowing them to tweak products to drive higher usage and sales. In the RegTech space, big data can be used to identify patterns that indicate fraud, enabling reporting to be done more easily and quickly.
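To illustrate the kind of pattern-spotting described above, here is a toy sketch (the data and threshold are invented for illustration, and real fraud-detection models are far more sophisticated) that flags transactions far outside an account's usual range:

```python
from statistics import mean, stdev


def flag_outliers(amounts, z_threshold=3.0):
    """Flag amounts more than z_threshold standard deviations from
    the mean -- a crude stand-in for a real fraud-detection model."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]


# Invented sample data: mostly small payments, one suspicious spike.
transactions = [12.5, 9.99, 14.2, 11.0, 13.7, 10.5, 950.0]
print(flag_outliers(transactions, z_threshold=2.0))  # [950.0]
```

At big-data scale the same idea runs continuously over streams of millions of transactions, which is where the volume and velocity mentioned above come in.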
A term usually attached to software products whose source code and blueprints are shared publicly. While licences exist to limit undesirable use, sharing code encourages open collaboration within the developer community, increasing the speed of innovation and software development. If the software is good, making it open source also boosts its developers' profiles and encourages adoption. That is why many Silicon Valley giants have released their internal tools as open source projects in recent years.
Examples of successful open models are TEDx and Wikipedia. Did you know that the ubiquitous machine learning framework TensorFlow, as well as many cryptocurrencies such as Bitcoin and Ethereum, are also open source?
In contrast to classical computing, where the smallest piece of information is either 0 or 1, quantum computing takes advantage of the ability of subatomic particles to exist in multiple states at once. By computing with these particles, called 'qubits', some operations can be done far faster and with less energy. While quantum computing will come in useful for large-scale simulations such as climate modelling and for certain mathematical calculations, most computation for home and office use does not need a quantum computer, as it would actually take more energy and time.
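To make the idea of a qubit a little more concrete, here is a toy sketch (a classical simulation on an ordinary computer, not real quantum hardware) that represents one qubit as a pair of amplitudes and puts it into an equal superposition with a Hadamard gate:

```python
import math
import random


def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1),
    mixing the |0> and |1> amplitudes into a superposition."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))


def measure(state):
    """Collapse the state: return 0 or 1 with probability equal to
    the squared amplitude of each basis state."""
    a0, _ = state
    return 0 if random.random() < a0 ** 2 else 1


qubit = (1.0, 0.0)       # start in the definite state |0>
qubit = hadamard(qubit)  # now an equal superposition of |0> and |1>
print(qubit[0] ** 2, qubit[1] ** 2)  # each roughly 0.5
```

Until it is measured, the qubit genuinely carries both possibilities at once; measuring it yields 0 or 1 with equal probability, which is the behaviour classical bits cannot reproduce.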
The process of extracting large amounts of information from websites. Though it can mean simply downloading web pages using a browser's built-in tools, the term usually refers to a 'web spider' written in code that crawls through a web page, extracts data and follows any hyperlinks it meets on to other websites. While scraping can breach a website's terms of use, Google and many other search engines do exactly this to gather results for our searches, as well as ranking those results in a certain order.
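A minimal sketch of the extraction step, using Python's built-in HTML parser on a sample page rather than a live site: a real spider would fetch pages over HTTP and queue the links it finds for further crawling.

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect every hyperlink (the href of each <a> tag) on a page --
    the links a spider would follow on to other sites."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


# A sample page standing in for a fetched web page.
sample_html = """
<html><body>
  <h1>Company directory</h1>
  <a href="https://example.com/about">About</a>
  <a href="https://example.org/partners">Partners</a>
</body></html>
"""

extractor = LinkExtractor()
extractor.feed(sample_html)
print(extractor.links)
# ['https://example.com/about', 'https://example.org/partners']
```

Repeating this over every discovered link is, in essence, how a search engine's crawler builds the index behind its rankings.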