Data Science & Data Engineering Glossary

Industry Terms

Up your data literacy and learn more about the data science and data engineering industry terms you’ll see most often in the field with this comprehensive glossary. You can download a full pdf of this glossary here.

A

A/B Testing

A statistical way of comparing two (or more) techniques, typically an incumbent against a new rival. A/B testing aims to determine not only which technique performs better but also whether the difference is statistically significant. A/B testing usually considers only two techniques using one measurement, but it can be applied to any finite number of techniques and measurements.
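
As a rough illustration, a minimal sketch of an A/B test on hypothetical conversion counts, using a chi-square test of independence from SciPy (the numbers are made up):

```python
# A/B test sketch: do variants A and B convert at significantly different rates?
from scipy.stats import chi2_contingency

# Hypothetical results: [conversions, non-conversions] for each variant.
observed = [[120, 1880],   # variant A: 120 of 2,000 visitors converted
            [150, 1850]]   # variant B: 150 of 2,000 visitors converted

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value = {p_value:.4f}")  # a small p-value suggests the difference is unlikely to be chance
```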

Accuracy
In classification, accuracy is defined as the number of observations that are correctly labeled by the algorithm as a fraction of the total number of observations the algorithm attempted to label. Colloquially, it is the fraction of times the algorithm guessed “right.”
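
A tiny illustration of the definition, using made-up labels:

```python
# Accuracy: the share of predictions that match the true labels.
y_true = ["cat", "dog", "dog", "cat", "dog"]
y_pred = ["cat", "dog", "cat", "cat", "dog"]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 0.8 -> the algorithm "guessed right" 4 times out of 5
```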

Algorithm
A series of repeatable steps for carrying out a certain type of task with data. 

AngularJS
An open-source JavaScript framework maintained by Google and the AngularJS community that lets developers create what are known as single-page web applications. AngularJS is popular with data scientists as a way to show the results of their analysis.

Anomaly Detection
Anomaly detection, also known as outlier detection, is the identification of rare items, events, observations, or patterns which raise suspicions by differing significantly from the majority of the data.
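
One very simple (illustrative) approach is to flag values that sit far from the mean; real anomaly detection often uses richer methods, and the cutoff below is a modeling choice:

```python
# Flag values more than 2 standard deviations from the mean as potential outliers.
import numpy as np

values = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 42.0, 10.2])
z_scores = (values - values.mean()) / values.std()
outliers = values[np.abs(z_scores) > 2]   # threshold of 2 is arbitrary
print(outliers)                           # [42.]
```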

API
Click here to learn more about APIs, the benefits to using them, the drawbacks and more.

Artificial Intelligence (AI)
Click here to learn what AI means, the benefits, drawbacks and more.

Artificial Neural Network (ANN)
Click here to learn what ANNs are, the benefits, the drawbacks and more.

B

Backtesting
Periodic evaluation of a trained machine learning algorithm to check whether the predictions of the algorithm have degraded over time. Backtesting is a critical component of model maintenance.

Baseline
A model or heuristic used as reference point for comparing how well a machine learning model is performing. A baseline helps model developers quantify the minimal, expected performance on a particular problem. Generally, baselines are set to simulate the performance of a model that doesn’t actually make use of our data to make predictions. This is called a naive benchmark.

Batch
A set of observations that are fed into a machine learning model to train it. Batch training is a counterpart to online learning, in which data are fed sequentially instead of all at once.

Bayes’ Theorem
Also, Bayes’ Rule. An equation for calculating the probability that something is true if something potentially related to it is true. 
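
A hedged worked example of Bayes' rule, P(A|B) = P(B|A) x P(A) / P(B), with made-up numbers for a medical-testing scenario:

```python
# Hypothetical inputs
p_disease = 0.01            # P(A): prior probability of having the disease
p_pos_given_disease = 0.95  # P(B|A): test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# P(B): total probability of a positive test
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# P(A|B): probability of disease given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.161: a positive test is far from certain
```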

Bias
Click here to understand what bias is, the types of bias and risks.

Big Data
Click here to understand what Big Data is, why it’s important and the types of Big Data.

Binomial Distribution
A distribution of outcomes of independent events with two mutually exclusive possible outcomes, a fixed number of trials, and a constant probability of success. 
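
For example, using SciPy's binomial distribution to answer questions about coin flips:

```python
# Probability of k successes in n independent trials with success probability p.
from scipy.stats import binom

n, p = 10, 0.5
print(binom.pmf(3, n, p))  # P(exactly 3 heads in 10 fair flips) ~ 0.117
print(binom.cdf(3, n, p))  # P(3 or fewer heads) ~ 0.172
```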

C

Classification
Click here to understand what Classification is, the types, benefits and drawbacks.

Cloud Computing
Click here to learn what cloud computing means, how it works, the benefits and more!

Clustering
Click here to learn what clustering is, how it works, the benefits and the drawbacks.

Coefficient
A number or algebraic symbol prefixed as a multiplier to a variable or unknown quantity (e.g., the x in x(y + z), or the 6 in 6ab).

Computational Linguistics
Also, natural language processing (NLP). A branch of computer science for parsing text of spoken languages to convert it to structured data that you can use to drive program logic. Early efforts focused on translating one language to another or accepting complete sentences as queries to databases; modern efforts often analyze documents and other data to extract potentially valuable information.

Confidence Interval
A range specified around an estimate to indicate margin of error, combined with a probability that a value will fall in that range. The field of statistics offers specific mathematical formulas to calculate confidence intervals.
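
A rough sketch of a 95% confidence interval for a sample mean using the normal approximation (mean plus or minus 1.96 standard errors); the data are made up, and for very small samples a t-interval would be more appropriate:

```python
import numpy as np

sample = np.array([4.1, 3.9, 4.5, 4.0, 4.3, 3.8, 4.2, 4.4])
mean = sample.mean()
std_err = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of the mean

lower, upper = mean - 1.96 * std_err, mean + 1.96 * std_err
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```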

Continuous Variable
A variable whose value can be any of an infinite number of values, typically within a particular range.

Correlation
The degree of relative correspondence, as between two sets of data. 

Covariance
A measure of the relationship between two variables whose values are observed at the same time; specifically, the average value of the product of the two variables diminished by the product of their average values.
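
Illustrating the definition with NumPy, computed two ways (values are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 6.0])

cov_by_definition = (x * y).mean() - x.mean() * y.mean()
cov_numpy = np.cov(x, y, bias=True)[0, 1]   # bias=True -> population covariance
print(cov_by_definition, cov_numpy)         # both give 1.6
```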

Cross-Validation
The name given to a set of techniques that split data into training sets and test sets when using data with an algorithm. The training set is given to the algorithm, along with the correct answers (labels), and becomes the set used to make predictions. The algorithm is then asked to make predictions for each item in the test set. The answers it gives are compared to the correct answers, and an overall score for how well the algorithm did is calculated. Cross-validation repeats this splitting procedure several times and computes an average score based on the scores from each split.
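
A minimal cross-validation sketch with scikit-learn: five train/test splits, one accuracy score per split, then the average (the dataset and model here are just convenient examples):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores)          # one score per split
print(scores.mean())   # overall cross-validated score
```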

D

Data Cleansing
The act of reviewing and revising data to remove duplicate entries, correct misspellings, add missing data and provide more consistency.

Database
A database is a structured storage space where the data is organized in many different tables in a way such that the necessary information can be easily accessed and summarized. Databases are mostly used with a relational database management system (RDBMS) such as Oracle or PostgreSQL. The most common programming language used to interact with the data from a database is SQL.

Database Management System (DBMS)
A database management system is a software package used to easily perform different operations on the data: accessing, manipulating, retrieving, managing, and storing the data in a database. Based on the way the data is organized and structured, there are different types of DBMS: relational, graph, hierarchical, etc. Examples of DBMSs include Oracle, MySQL, PostgreSQL, Microsoft SQL Server, and MongoDB.

Data Dictionary
A set of information describing the contents, format, and structure of a database and the relationship between its elements, used to control access to and manipulation of the database.

Data Engineering
Click here to understand what data engineering is, why it’s important and the process. And make sure you apply to our data science & engineering bootcamp while you’re here.

Data Exhaust
The data that a person creates as a byproduct of a common activity—for example, a cell call log or web search history.

Data Feed
A means for a person to receive a stream of data. Examples of data feed mechanisms include RSS or Twitter.

Data Governance
The planning, oversight, and control over management of data and data-related sources. Data governance sets the roles, responsibilities, and processes for ensuring data availability, relevance, quality, usability, integrity, and security. Data governance includes a governing body, a framework of rules and practices to meet the company’s information needs, and a program to perform these practices.

Data Lake
Click here to learn what a data lake is, how they work, the benefits, drawbacks and more.

Data Literacy
Data literacy is the ability to read, write, analyze, communicate, and reason with data to make better data-driven decisions.

Data Mart
The access layer of a data warehouse used to provide data to users.

Data Migration
The process of moving data between different storage types or formats, or between different computer systems.

Data Mining

Click here to learn what data mining is and how it works.

Data Model, Data Modeling
An agreed upon data structure. This structure is used to pass data from one individual, group, or organization to another, so that all parties know what the different data components mean. Often meant for both technical and non-technical users.

Data Profiling
The process of collecting statistics and information about data in an existing source.

Data Quality
The measure of data to determine its worthiness for decision making, planning or operations.

Data Replication
The process of sharing information to ensure consistency between redundant sources.

Data Science

Click here to learn how data science works and how it’s applied.

Data Scientist
Data Scientists investigate, extract, and report meaningful insights in the organization’s data. They communicate these insights to non-technical stakeholders, and have a good understanding of machine learning workflows and how to tie them back to business applications. They work almost exclusively with coding tools, conduct analysis, and often work with big data tools.

Dataset
A dataset is a collection of data of one or many types representing real-life or synthetically generated observations, and used for statistical analysis or data modeling.

Data Steward
A person responsible for data stored in a data field.

Data Structure
A specific way of storing and organizing data.

Data Visualization
Click here to learn what data visualization is, the types and why it’s important.

Data Warehouse
A place to store data for the purpose of reporting and analysis.

Data Wrangling
The process of transforming and cleaning data from raw formats to appropriate formats for later use. Also called data munging.

Decision Trees
Click here to understand decision trees, types, applications and benefits.

Deep Learning
Click here to learn how deep learning works, the types and the use cases.

Dependent Variable
The value of a dependent variable “depends” on the value of the independent variable.

E

Errors at Random
Errors-at-random are data errors such as missing or mismeasured data that are random with respect to the data we observe. Errors are not-at-random if the probability that an observation is missing or erroneous is correlated with the observed data. Errors-not-at-random are especially problematic if errors are correlated with labels.

ETL
Click here to learn what ETL is, how it works, the benefits and the drawbacks.

Expert System
An expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by processing data describing the context of the decision being made and applying logic, mainly in the form of if-then rules.

G

GATE
“General Architecture for Text Engineering,” an open source, Java-based framework for natural language processing tasks.

Gradient Boosting
Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees.
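
An illustrative gradient boosting classifier with scikit-learn, where each shallow tree is fit to the errors of the trees before it (the dataset and hyperparameter values are just examples):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out data
```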

H

Hadoop
Click here to explore Hadoop, its relationship with Big Data, the ecosystem and its benefits.

Hyperparameter
Hyperparameters are attributes pertaining to a machine learning model whose value is set manually before starting the training process. Unlike the other parameters, hyperparameters cannot be estimated or learned directly from the data. 

Hive
Hive is a data warehouse software project built on top of Hadoop for providing data query and analysis. Hive gives a SQL-like interface to query data stored in  various databases and file systems that integrate with Hadoop.

I

Imputation
Imputation is the process of filling in missing values in a dataset. Imputation techniques can be either statistical (mean/mode imputation) or machine learning techniques (KNN imputation).
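
Both strategies mentioned above can be sketched with scikit-learn (the small array is made up):

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan],
              [4.0, 5.0]])

print(SimpleImputer(strategy="mean").fit_transform(X))  # statistical: column-mean imputation
print(KNNImputer(n_neighbors=2).fit_transform(X))       # machine learning: KNN imputation
```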

Internet of things (IoT)
The Internet of things (IoT) is the extension of internet connectivity into physical devices and everyday objects. Embedded with electronics, internet connectivity, artificial intelligence, and other forms of hardware, these devices can communicate and interact with others over the Internet, and they can be remotely monitored and controlled.

J

Java
Java is a general-purpose, object-oriented, compiled programming language. While it is not among the most common languages used by data scientists, it and its close relative Scala are the native language of many distributed computing frameworks such as Hadoop and Spark.

JavaScript
A scripting language (no relation to Java) originally designed for embedding logic in web pages, but which later evolved into a more general-purpose development language.

K

K-Means
K-Means is one of the most popular clustering algorithms. It places K cluster centers (called centroids) at tentative coordinates in the data and iteratively assigns each observation to its nearest centroid, updating the centroids until they converge. Data points are similar inside a cluster and different from the data points in the other clusters.
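
A short K-Means sketch with scikit-learn, asking for three clusters and inspecting the assignments and the final centroid coordinates (the dataset is just a convenient example):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])      # cluster assigned to each observation
print(kmeans.cluster_centers_)  # coordinates of the 3 centroids
```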

K-Nearest Neighbors (kNN)
K-nearest neighbors (KNN) is a supervised learning algorithm that classifies observations based on their similarity to their nearest neighbors. The most important parameters of KNN that can be tuned are the number of nearest neighbors and the distance metric (Minkowski, Euclidean, Manhattan, etc.).
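
A minimal sketch with scikit-learn; the two tunable settings named above are passed as parameters:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))  # accuracy on held-out data
```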

L

Label
In supervised learning applications, labels are the components of the data that indicate the desired predictions or decisions we would like the machine learning algorithm to make for each observation we pass into the algorithm. Supervised learning algorithms learn to use other features in the data to predict labels so that these algorithms can learn to predict labels in other instances when the labels are not known or determined. In certain fields, labels are called targets. See also supervised learning, classification, regression.

Leakage
Leakage is the introduction of information during training that will not be germane or available to the deployed algorithm.

Length
Length measures the number of observations in our dataset.

Linear Regression
Click here to learn what linear regression is, how linear regression works and examples.

Linear Relationship
The relationship between two varying amounts, such as price and sales, that can be expressed with an equation that can be represented as a straight line on a graph.

M

Machine Learning
Click here to understand Machine Learning, how it works, the types and what industries use Machine learning.

Machine Learning Model
The model artifact that is created in the process of providing a machine learning algorithm with training data from which to learn.

MapReduce
MapReduce is a programming model and implementation designed to work with big data sets in parallel on a distributed cluster system.  MapReduce programs consist of two steps. First, a map step takes chunks of data and processes it in some way (e.g. parsing text into words). Second, a reduce step takes the data that are generated by the map step and performs some kind of summary calculation (e.g. counting word occurrences). In between the map and reduce step, data move between machines using a key-value pair system that guarantees that each reducer has the information it needs to complete its calculation (e.g. all of the occurrences of the word “Python” get routed to a single processor so they can be counted in aggregate).
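
A toy word count that mirrors the MapReduce pattern in plain Python (a sketch of the idea, not a distributed implementation):

```python
from collections import defaultdict

documents = ["python is fun", "data is big", "python is everywhere"]

# Map step: emit (word, 1) for every word in every document.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: route all pairs with the same key to the same group.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce step: summarize each group (here, sum the counts).
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)   # e.g. {'python': 2, 'is': 3, ...}
```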

MATLAB
A commercial computer language and environment popular for visualization and algorithm development.

Mean Absolute Error
Also, MAE. The average of the absolute errors of all predicted values when compared with observed values.

Mean Squared Error
Also, MSE. The average of the squares of all the errors found when comparing predicted values with observed values.
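
Both MAE and MSE are available in scikit-learn; here is a quick comparison of hypothetical predicted values against observed values:

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

observed  = [3.0, 5.0, 2.5, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]

print(mean_absolute_error(observed, predicted))  # average of the absolute errors
print(mean_squared_error(observed, predicted))   # average of the squared errors
```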

Minimum Viable Product (MVP)
The minimum viable product is the smallest complete unit of work that would be valuable in its own right, even if the rest of the project fizzled out.

Model
The specification of mathematical or probabilistic relationships existing between different variables. Because “modeling” can mean many things, the term “statistical modeling” is often used to more accurately describe the kind of modeling that data scientists do.

N

Naive Bayes
A classification algorithm that predicts labels from data by assuming that the features of the data are statistically independent from each other. Due to this assumption, Naive Bayes models can be easily fit on distributed systems.
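
A minimal Naive Bayes sketch with scikit-learn (the Gaussian variant and the iris data are just convenient examples):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model treats the features as independent of one another given the class.
model = GaussianNB().fit(X_train, y_train)
print(model.score(X_test, y_test))
```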

Natural Language Processing (NLP)
Natural Language Processing (NLP) is a branch of data science that applies machine learning techniques to help machines learn to interpret and process textual data consisting of human language. Applications of NLP include text classification (predicting what type of content a document contains), sentiment analysis (determining whether a statement is positive, negative, or neutral), and translation. NLP also comprises techniques to encode textual content numerically to use in machine learning applications.

Neural Network
A machine learning method modeled after the brain. This method is extremely powerful and flexible, as it is created from an arbitrary number of artificial neurons that can be connected in various patterns appropriate to the problem at hand, and the strengths of those connections are adjusted during the training process. Neural networks are able to learn extremely complex relationships between data and output, at the cost of large computational needs. They have been used to great success in processing image, movie, and text data, and any situation with very large numbers of features.

Normal Distribution
Also, Gaussian distribution. A probability distribution which, when graphed, is a symmetrical bell curve with the mean value at the center. The standard deviation value affects the height and width of the graph.

NoSQL
A database management system that uses any of several alternatives to the relational, table-oriented model used by SQL databases. Originally meant as “not SQL,” it has come to mean something closer to “not only SQL” due to the specialized nature of NoSQL database management systems. These systems often are tasked with playing specific roles in a larger system that may also include SQL and additional NoSQL systems.

O

Online Learning
Online learning is a learning paradigm by which machine learning models may be trained by passing them training data sequentially or in small groups (mini-batches). This is important in instances where the amount of data on hand exceeds the capacity of the RAM of the system on which a model is being developed. Online learning also allows models to be continually updated as new data are produced.
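
A sketch of online learning with scikit-learn, updating a model one mini-batch at a time via partial_fit instead of on all data at once (the batches here are randomly generated for illustration):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # all classes must be declared on the first partial_fit call

for _ in range(5):  # imagine each iteration is a new batch arriving over time
    X_batch = np.random.randn(100, 3)
    y_batch = (X_batch.sum(axis=1) > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.predict(np.array([[1.0, 2.0, 3.0]])))
```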

Open Source
Open source refers to free-licensed software and resources available for further modifications and sharing.

Outlier
An outlier is an abnormal value in a dataset that deviates considerably from the rest of the observations.

Overfitting
Overfitting refers to when a model learns too much information from the training set including potential noise and outliers. As a result, it becomes too complex, too conditioned on the particular training set, and fails to adequately perform on unseen data. See variance.

P

Pandas
Click here to learn more about what pandas does, its benefits, drawbacks and takeaways.

Perl
An older scripting language with roots in pre-Linux UNIX systems. Perl has always been popular for text processing, especially data cleanup and enhancement tasks.

Pig
Apache Pig is a high-level platform for creating programs that run on Hadoop. Pig is designed to make it easier to create data processing and analysis  workflows that can be executed in MapReduce, Spark, or other distributed frameworks.

Pivot Table
Pivot tables quickly summarize long lists of data, without requiring you to write a single formula or copy a single cell. But the most notable feature of pivot tables is that you can arrange them dynamically.
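
A pandas pivot table sketch, summarizing a long list of hypothetical sales records by region and product:

```python
import pandas as pd

df = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "North"],
    "product": ["A", "B", "A", "B", "A"],
    "sales":   [100, 150, 200, 50, 120],
})

# Rows = region, columns = product, cells = total sales.
print(pd.pivot_table(df, values="sales", index="region",
                     columns="product", aggfunc="sum"))
```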

Precision
A performance measure for classification models. Precision measures the fraction of all of the observations that a classification algorithm flagged positively that were flagged correctly. For example, if our algorithm were judging suspects, precision would measure the percentage of all the suspects declared guilty by the algorithm who actually were guilty. See also recall.
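
A quick illustration with scikit-learn and made-up labels:

```python
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 0, 1]   # 1 = actually positive (e.g., guilty)
y_pred = [1, 1, 1, 0, 0, 0, 1]   # 1 = flagged positive by the algorithm

# Of the 4 observations flagged positive, 3 were correct.
print(precision_score(y_true, y_pred))  # 0.75
```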

Predictive Analytics
Click here to learn how predictive analytics works, modeling approaches and techniques for building models.

Predictive Modeling
The development of statistical models to predict future events.

Python

Click here to learn the history of python, how it’s used and examples.

R

R

Click here to learn the history of R, what it’s used for, examples and benefits.

Random Forest
Random Forest is a supervised learning algorithm used for regression and classification problems; it combines the outputs of many decision trees into a single model.
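
A short random forest sketch with scikit-learn, where many decision trees vote on the final prediction (the dataset is just a convenient example):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # accuracy on held-out data
```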

Recall
A performance measure for classification models. Recall measures the fraction of all of the observations that a classification algorithm should have flagged positively that were actually flagged by the algorithm. 
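
Using the same made-up labels as in the precision example above:

```python
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 0, 1]
y_pred = [1, 1, 1, 0, 0, 0, 1]

# Of the 4 observations that were actually positive, 3 were flagged.
print(recall_score(y_true, y_pred))  # 0.75
```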

Regression
Regression is one of the two major types of supervised learning models, in which the labels we train the algorithm to predict are ordered quantities like prices or numerical amounts.

Reinforcement Learning (RL)
Reinforcement learning (RL) is a stand-alone branch of machine learning (neither supervised nor unsupervised) where an algorithm gradually learns by interacting with an environment.

Relational Database
A relational database is a type of database that stores data in several tables related to one another by means of unique IDs (keys) from which the data can be accessed, extracted, summarized, or reassembled in different ways.

Root Mean Square Error
The root mean squared error (RMSE) is the square root of the mean squared error. This evaluation metric is more intuitive than MSE because it is expressed in the same units of measurement as the original data.

Ruby
A scripting language that first appeared in 1996. Ruby is popular in the data science community, but not as popular as Python, which has more specialized libraries available for data science tasks.

S

SAS
A commercial statistical software suite that includes a programming language also known as SAS.

Scala
Scala is a Java-like programming language commonly used by data scientists. It is the native language of Spark.

Scikit-Learn
Click here to learn more about scikit-learn, how it’s used and the pros and cons.

Shell
A computer’s operating system when used from the command line. Along with scripting languages such as Perl and Python, Linux-based shell tools (included and available for Mac and Windows computers) such as grep, diff, split, comm, head and tail are popular for data wrangling. A series of shell commands stored in a file that lets you execute the series by entering the file’s name is known as a shell script.

Simpson’s Paradox
Simpson’s paradox is a phenomenon in which a trend appears in several different groups of data but disappears or reverses when these groups are combined.
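
A small illustrative example with made-up numbers: treatment A has the higher success rate inside each group, yet treatment B looks better when the groups are pooled.

```python
import pandas as pd

df = pd.DataFrame({
    "group":     ["G1", "G1", "G2", "G2"],
    "treatment": ["A",  "B",  "A",  "B"],
    "successes": [8,    70,   20,   2],
    "trials":    [10,   100,  100,  20],
})

per_group = df.assign(rate=df.successes / df.trials)
print(per_group)   # A beats B in G1 (0.80 vs 0.70) and in G2 (0.20 vs 0.10)

pooled = df.groupby("treatment")[["successes", "trials"]].sum()
print(pooled.successes / pooled.trials)   # pooled: A ~0.25, B 0.60 -- the trend reverses
```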

Spark
Click here to learn more about what Spark is, its features, the benefits and use cases.

SQL
Click here to learn more about what SQL can do, its benefits, drawbacks and key takeaways.

Stata
A commercial statistical software package commonly used by academics, particularly in the social sciences.

Supervised Learning
Click here to learn more about what supervised learning means, why it’s important and examples. See also unsupervised learning, machine learning.

T

Tableau
A commercial data visualization package often used in data science projects.

Time Series Data
A time series is a sequence of measurements of some quantity taken at different times, often but not necessarily at equally spaced intervals.

U

Underfitting
Underfitting is when a model is unable to detect the patterns in the training set because it was built on insufficient information. As a result, the model is too simple and performs poorly on unseen data, and even on the training set itself. Underfitted models have high bias.

Unstructured Data
Unstructured data is any data that does not fit a predefined data structure such as the typical row-column structure of a database. Examples of such data are images, emails, text documents, videos and audio.

Unsupervised Learning
Click here to learn more about what unsupervised learning means, why it’s important and examples. See also supervised learning, machine learning.

V

Variance
Variance is the amount by which the estimate of the target function would change if different training data were used. Another way of saying this is that variance measures the degree to which a model picks up noise as opposed to signal. High variance is synonymous with overfitting.

W

Web Scraping
Web scraping is the process of extracting specific data from websites for further usage. Web scraping can be done automatically by writing a program to capture the necessary information from a website.
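
A hedged web-scraping sketch with requests and BeautifulSoup; the URL and the CSS selector are placeholders and would depend on the target page:

```python
import requests
from bs4 import BeautifulSoup

url = "https://example.com/articles"             # hypothetical page
html = requests.get(url, timeout=10).text        # download the raw HTML
soup = BeautifulSoup(html, "html.parser")        # parse it into a navigable tree

# Extract the text of every element matching a (placeholder) selector.
titles = [h2.get_text(strip=True) for h2 in soup.select("h2.article-title")]
print(titles)
```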

Width
Width measures the number of features in a dataset.
