
Glossary of computer science



This glossary of computer science is a list of definitions of terms and concepts used in computer science, its sub-disciplines, and related fields, including terms relevant to software, data science, and computer programming.


A

abstract data type (ADT)
A mathematical model for data types in which a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. This contrasts with data structures, which are concrete representations of data from the point of view of an implementer rather than a user.
abstract method
One with only a signature and no implementation body. It is often used to specify that a subclass must provide an implementation of the method. Abstract methods are used to specify interfaces in some computer languages.
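As a minimal illustration (a Python sketch using the standard abc module; the class names are invented for this example), an abstract method has a signature but no body, and a subclass must implement it before it can be instantiated:

  from abc import ABC, abstractmethod

  class Shape(ABC):
      @abstractmethod
      def area(self):
          """Abstract method: signature only, no implementation."""

  class Square(Shape):
      def __init__(self, side):
          self.side = side

      def area(self):                    # concrete implementation required by Shape
          return self.side * self.side

  print(Square(3).area())                # 9; Shape() itself cannot be instantiated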
abstraction
1.  In software engineering and computer science, the process of removing physical, spatial, or temporal details or attributes in the study of objects or systems in order to more closely attend to other details of interest; it is also very similar in nature to the process of generalization.
2.  The result of this process: an abstract concept-object created by keeping common features or attributes to various concrete objects or systems of study.
agent architecture
A blueprint for software agents and intelligent control systems depicting the arrangement of components. The architectures implemented by intelligent agents are referred to as cognitive architectures.
agent-based model (ABM)
A class of computational models for simulating the actions and interactions of autonomous agents (both individual and collective entities, such as organizations or groups) with a view to assessing their effects on the system as a whole. It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems, and evolutionary programming. Monte Carlo methods are used to introduce randomness.
aggregate function
In database management, a function in which the values of multiple rows are grouped together to form a single value of more significant meaning or measurement, such as a sum, count, or max.
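A minimal Python sketch of the idea (the rows and field names are invented for this example): values from many rows are combined into single summary values such as a sum, count, or maximum.

  rows = [("fruit", 3), ("fruit", 5), ("veg", 2)]    # hypothetical (category, quantity) rows

  totals = {}
  for category, quantity in rows:
      totals[category] = totals.get(category, 0) + quantity   # SUM per group
  print(totals)                                      # {'fruit': 8, 'veg': 2}

  print(len(rows))                                   # COUNT of all rows -> 3
  print(max(quantity for _, quantity in rows))       # MAX over all rows -> 5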
agile software development
An approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer(s)/end user(s). It advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages rapid and flexible response to change.
algorithm
An unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks. They are ubiquitous in computing technologies.
algorithm design
A method or mathematical process for problem-solving and for engineering algorithms. The design of algorithms is part of many solution theories of operations research, such as dynamic programming and divide-and-conquer. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, such as the template method pattern and the decorator pattern.
algorithmic efficiency
A property of an algorithm which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and its efficiency can be measured based on the usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.
American Standard Code for Information Interchange (ASCII)
A character encoding standard for electronic communications. ASCII codes represent text in computers, telecommunications equipment, and other devices. Most modern character-encoding schemes are based on ASCII, although they support many additional characters.
application programming interface (API)
A set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer.
application software

Also simply application or app.

Computer software designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Common examples of applications include word processors, spreadsheets, accounting applications, web browsers, media players, aeronautical flight simulators, console games, and photo editors. This contrasts with system software, which is mainly involved with managing the computer's most basic running operations, often without direct input from the user. The collective noun application software refers to all applications collectively.
array data structure

Also simply array.

A data structure consisting of a collection of elements (values or variables), each identified by at least one array index or key. An array is stored such that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array.
artifact
One of many kinds of tangible by-products produced during the development of software. Some artifacts (e.g. use cases, class diagrams, and other Unified Modeling Language (UML) models, requirements, and design documents) help describe the function, architecture, and design of software. Other artifacts are concerned with the process of development itself—such as project plans, business cases, and risk assessments.
artificial intelligence (AI)

Also machine intelligence.

Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of "intelligent agents": devices capable of perceiving their environment and taking actions that maximize the chance of successfully achieving their goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
ASCII
See American Standard Code for Information Interchange.
assertion
In computer programming, a statement that a predicate (Boolean-valued function, i.e. a true–false expression) is always true at that point in code execution. It can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run and if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception.
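A brief Python sketch (the function is invented for this example) of an assertion whose predicate is checked at run time and, if false, raises an assertion failure:

  def average(values):
      assert len(values) > 0, "average() requires at least one value"   # predicate must hold here
      return sum(values) / len(values)

  print(average([2, 4, 6]))    # 4.0
  # average([]) would raise AssertionError: average() requires at least one value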
associative array
An associative array, map, symbol table, or dictionary is an abstract data type composed of a collection of (key, value) pairs, such that each possible key appears at most once in the collection. Operations associated with this data type allow (see the sketch after this list):
  • the addition of a pair to the collection
  • the removal of a pair from the collection
  • the modification of an existing pair
  • the lookup of a value associated with a particular key
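A minimal sketch of these operations using Python's built-in dict type, which implements an associative array (the keys and values are invented for this example):

  phone_book = {}                        # empty associative array
  phone_book["alice"] = "555-0100"       # addition of a (key, value) pair
  phone_book["alice"] = "555-0199"       # modification of an existing pair
  print(phone_book.get("alice"))         # lookup of the value for a key -> 555-0199
  del phone_book["alice"]                # removal of a pair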
automata theory
The study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science).
automated reasoning
An area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science, and even philosophy.

B

bandwidth
The maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth.
Bayesian programming
A formalism and methodology for specifying probabilistic models and solving problems when less than the necessary information is available.
benchmark
The act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it. The term benchmark is also commonly utilized for the purposes of elaborately designed benchmarking programs themselves.
best, worst and average case
Expressions of what the resource usage is at least, at most, and on average, respectively, for a given algorithm. Usually the resource being considered is running time, i.e. time complexity, but it could also be memory or some other resource. Best case is the function which performs the minimum number of steps on input data of n elements; worst case is the function which performs the maximum number of steps on input data of size n; average case is the function which performs an average number of steps on input data of n elements.
big data
A term used to refer to data sets that are too large or complex for traditional data-processing application software to adequately deal with. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.
big O notation
A mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
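For example, 3n^2 + 5n + 7 = O(n^2), since for all sufficiently large n the whole expression is bounded above by a constant multiple of n^2.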
binary number
In mathematics and digital electronics, a number expressed in the base-2 numeral system or binary numeral system, which uses only two symbols: typically 0 (zero) and 1 (one).
binary search algorithm

Also simply binary search, half-interval search, logarithmic search, or binary chop.

A search algorithm that finds the position of a target value within a sorted array.
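A minimal Python sketch (illustrative, not part of the glossary entry) that repeatedly halves the search interval of a sorted list:

  def binary_search(sorted_items, target):
      low, high = 0, len(sorted_items) - 1
      while low <= high:
          mid = (low + high) // 2            # compare against the middle element
          if sorted_items[mid] == target:
              return mid                     # position of the target value
          elif sorted_items[mid] < target:
              low = mid + 1                  # discard the lower half
          else:
              high = mid - 1                 # discard the upper half
      return -1                              # target not present

  print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3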
binary tree
A tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. Some authors allow the binary tree to be the empty set as well.
bioinformatics
An interdisciplinary field that combines biology, computer science, information engineering, mathematics, and statistics to develop methods and software tools for analyzing and interpreting biological data. Bioinformatics is widely used for in silico analyses of biological queries using mathematical and statistical techniques.
bit
A basic unit of information used in computing and digital communications; a portmanteau of binary digit. A binary digit can have one of two possible values, and may be physically represented with a two-state device. These state values are most commonly represented as either 0 or 1.
bit rate (R)

Also bitrate.

In telecommunications and computing, the number of bits that are conveyed or processed per unit of time.
blacklist

Also block list.

In computing, a basic access control mechanism that allows through all elements (email addresses, users, passwords, URLs, IP addresses, domain names, file hashes, etc.), except those explicitly mentioned in a list of prohibited elements. Those items on the list are denied access. The opposite is a whitelist, which means only items on the list are allowed through whatever gate is being used while all other elements are blocked. A greylist contains items that are temporarily blocked (or temporarily allowed) until an additional step is performed.
BMP file format

Also bitmap image file, device independent bitmap (DIB) file format, or simply bitmap.

A raster graphics image file format used to store bitmap digital images independently of the display device (such as a graphics adapter), used especially on Microsoft Windows and OS/2 operating systems.
Boolean data type
A data type that has one of two possible values (usually denoted true and false), intended to represent the two truth values of logic and Boolean algebra. It is named after George Boole, who first defined an algebraic system of logic in the mid-19th century. The Boolean data type is primarily associated with conditional statements, which allow different actions by changing control flow depending on whether a programmer-specified Boolean condition evaluates to true or false. It is a special case of a more general logical data type (see probabilistic logic)—i.e. logic need not always be Boolean.
Boolean expression
An expression used in a programming language that returns a Boolean value when evaluated, that is one of true or false. A Boolean expression may be composed of a combination of the Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions.
Boolean algebra
In mathematics and mathematical logic, the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0, respectively. Contrary to elementary algebra, where the values of the variables are numbers and the prime operations are addition and multiplication, the main operations of Boolean algebra are the conjunction and (denoted as ∧), the disjunction or (denoted as ∨), and the negation not (denoted as ¬). It is thus a formalism for describing logical relations in the same way that elementary algebra describes numeric relations.
byte
A unit of digital information that most commonly consists of eight bits, representing a binary number. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.
booting
The procedures implemented in starting up a computer or computer appliance until it can be used. Booting can be initiated by hardware, such as a button press, or by a software command. After the power is switched on, the computer is relatively limited and can read only part of its storage, called read-only memory. There, a small program called firmware is stored. It performs power-on self-tests and, most importantly, allows access to other types of memory, such as a hard disk and main memory. The firmware loads larger programs into the computer's main memory and runs them.

C

callback
Any executable code that is passed as an argument to other code that is expected to "call back" (execute) the argument at a given time. This execution may be immediate, as in a synchronous callback, or it might happen at a later time, as in an asynchronous callback.
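A minimal Python sketch of a synchronous callback (the function names are invented for this example): the callback is passed as an argument and executed by the code that receives it.

  def announce(result):                     # the callback
      print("Result is", result)

  def compute_and_report(x, y, callback):
      callback(x + y)                       # "calls back" the code it was given

  compute_and_report(2, 3, announce)        # prints: Result is 5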
central processing unit (CPU)
The electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit (CU), distinguishing these core elements of a computer from external components such as main memory and I/O circuitry.
character
A unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language.
cipher

Also cypher.

In cryptography, an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure.
class
In object-oriented programming, an extensible program-code-template for creating objects, providing initial values for state (member variables) and implementations of behavior (member functions or methods). In many languages, the class name is used as the name for the class (the template itself), the name for the default constructor of the class (a subroutine that creates objects), and as the type of objects generated by instantiating the class; these distinct concepts are easily conflated.
class-based programming

Also class-orientation.

A style of object-oriented programming (OOP) in which inheritance occurs via defining "classes" of objects, instead of via the objects alone (compare prototype-based programming).
client
A piece of computer hardware or software that accesses a service made available by a server. The server is often (but not always) on another computer system, in which case the client accesses the service by way of a network. The term applies to the role that programs or devices play in the client–server model.
cleanroom software engineering
A software development process intended to produce software with a certifiable level of reliability. The cleanroom process was originally developed by Harlan Mills and several of his colleagues including Alan Hevner at IBM. The focus of the cleanroom process is on defect prevention, rather than defect removal.
closure

Also lexical closure or function closure.

A technique for implementing lexically scoped name binding in a language with first-class functions. Operationally, a closure is a record storing a function together with an environment.
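A minimal Python sketch (names invented for this example): the inner function together with the captured variable rate forms a closure.

  def make_multiplier(rate):
      def multiply(value):
          return value * rate    # 'rate' comes from the enclosing environment
      return multiply            # the returned function plus that environment is the closure

  double = make_multiplier(2)
  print(double(21))              # 42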
cloud computing
Shared pools of configurable computer system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility.
code library
A collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values or type specifications. In IBM's OS/360 and its successors they are referred to as partitioned data sets.
coding
See computer programming.
coding theory
The study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.
cognitive science
The interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology.
collection
A collection or container is a grouping of some variable number of data items (possibly zero) that have some shared significance to the problem being solved and need to be operated upon together in some controlled fashion. Generally, the data items will be of the same type or, in languages supporting inheritance, derived from some common ancestor type. A collection is a concept applicable to abstract data types, and does not prescribe a specific implementation as a concrete data structure, though often there is a conventional choice (see Container for type theory discussion).
comma-separated values (CSV)
A delimited text file that uses a comma to separate values. A CSV file stores tabular data (numbers and text) in plain text. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. The use of the comma as a field separator is the source of the name for this file format.
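A minimal Python sketch using the standard csv module (the file name and records are invented for this example):

  import csv

  with open("people.csv", "w", newline="") as f:     # hypothetical file name
      writer = csv.writer(f)
      writer.writerow(["name", "age"])               # one record per line, fields separated by commas
      writer.writerow(["Ada", 36])

  with open("people.csv", newline="") as f:
      for record in csv.reader(f):                   # each line is parsed back into its fields
          print(record)                              # ['name', 'age'] then ['Ada', '36']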
compiler
A computer program that transforms computer code written in one programming language (the source language) into another programming language (the target language). Compilers are a type of translator that support digital devices, primarily computers. The name compiler is primarily used for programs that translate source code from a high-level programming language to a lower-level language (e.g. assembly language, object code, or machine code) to create an executable program.
computability theory
also known as recursion theory, is a branch of mathematical logic, of computer science, and of the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, recursion theory overlaps with proof theory and effective descriptive set theory.
computation
Any type of calculation that includes both arithmetical and non-arithmetical steps and follows a well-defined model, e.g. an algorithm. The study of computation is paramount to the discipline of computer science.
computational biology
Involves the development and application of data-analytical and theoretical methods, mathematical modelling and computational simulation techniques to the study of biological, ecological, behavioural, and social systems. The field is broadly defined and includes foundations in biology, applied mathematics, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, computer science, and evolution. Computational biology is different from biological computing, which is a subfield of computer science and computer engineering using bioengineering and biology to build computers.
computational chemistry
A branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids.
computational complexity theory
A subfield of theoretical computer science that focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer and is solvable by the mechanical application of mathematical steps, such as an algorithm.
computational model
A mathematical model in computational science that requires extensive computational resources to study the behavior of a complex system by computer simulation.
computational neuroscience

Also theoretical neuroscience or mathematical neuroscience.

A branch of neuroscience which employs mathematical models, theoretical analysis, and abstractions of the brain to understand the principles that govern the development, structure, physiology, and cognitive abilities of the nervous system.
computational physics
Is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science.
computational science

Also scientific computing and scientific computation (SC).

An interdisciplinary field that uses advanced computing capabilities to understand and solve complex problems. It is an area of science which spans many disciplines, but at its core it involves the development of computer models and simulations to understand complex natural systems.
computational steering
Is the practice of manually intervening with an otherwise autonomous computational process, to change its outcome.
computer
A device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks.
computer architecture
A set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In other definitions computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation.
computer data storage

Also simply storage or memory.

A technology consisting of computer components and recording media that are used to retain digital data. Data storage is a core function and fundamental component of all modern computer systems.
computer ethics
A part of practical philosophy concerned with how computing professionals should make decisions regarding professional and social conduct.
computer graphics
Pictures and films created using computers. Usually, the term refers to computer-generated image data created with the help of specialized graphical hardware and software. It is a vast and recently developed area of computer science.
computer network

Also data network.

A digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections (data links) between nodes. These data links are established over cable media such as wires or optic cables, or wireless media such as Wi-Fi.
computer program
Is a collection of instructions that can be executed by a computer to perform a specific task.
computer programming
The process of designing and building an executable computer program for accomplishing a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms in a chosen programming language (commonly referred to as coding). The source code of a program is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate the performance of a task for solving a given problem. The process of programming thus often requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.
computer science
The theory, experimentation, and engineering that form the basis for the design and use of computers. It involves the study of algorithms that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems.
computer scientist
A person who has acquired the knowledge of computer science, the study of the theoretical foundations of information and computation and their application.
computer security

Also cybersecurity or information technology security (IT security).

The protection of computer systems from theft or damage to their hardware, software, or electronic data, as well as from disruption or misdirection of the services they provide.
computer vision
An interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.
computing
Is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes study of algorithmic processes and development of both hardware and software. It has scientific, engineering, mathematical, technological and social aspects. Major computing fields include computer engineering, computer science, cybersecurity, data science, information systems, information technology and software engineering.
concatenation
In formal language theory and computer programming, string concatenation is the operation of joining character strings end-to-end. For example, the concatenation of "snow" and "ball" is "snowball". In certain formalisations of concatenation theory, also called string theory, string concatenation is a primitive notion.
concurrency
The ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome. This allows for parallel execution of the concurrent units, which can significantly improve overall speed of the execution in multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability property of a program, algorithm, or problem into order-independent or partially-ordered components or units.
conditional

Also conditional statement, conditional expression, and conditional construct.

A feature of a programming language which performs different computations or actions depending on whether a programmer-specified Boolean condition evaluates to true or false. Apart from the case of branch predication, this is always achieved by selectively altering the control flow based on some condition.
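A brief Python sketch of a conditional choosing between two actions based on a Boolean condition (the values are invented for this example):

  temperature = 31
  if temperature > 30:
      print("It is hot")         # executed when the condition is true
  else:
      print("It is not hot")     # executed when the condition is false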
container
Is a class, a data structure, or an abstract data type (ADT) whose instances are collections of other objects. In other words, they store objects in an organized way that follows specific access rules. The size of the container depends on the number of objects (elements) it contains. Underlying (inherited) implementations of various container types may vary in size and complexity, and provide flexibility in choosing the right implementation for any given scenario.
continuation-passing style (CPS)
A style of functional programming in which control is passed explicitly in the form of a continuation. This is contrasted with direct style, which is the usual style of programming. Gerald Jay Sussman and Guy L. Steele, Jr. coined the phrase in AI Memo 349 (1975), which sets out the first version of the Scheme programming language.
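A minimal sketch contrasting direct style with continuation-passing style, shown here in Python rather than Scheme (the function names are invented for this example):

  def add_direct(a, b):
      return a + b                     # direct style: the result is returned to the caller

  def add_cps(a, b, continuation):
      continuation(a + b)              # CPS: control is passed on explicitly to the continuation

  print(add_direct(1, 2))                          # 3
  add_cps(1, 2, lambda result: print(result))      # 3, delivered to the continuation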
control flow

Also flow of control.

The order in which individual statements, instructions or function calls of an imperative program are executed or evaluated. The emphasis on explicit control flow distinguishes an imperative programming language from a declarative programming language.
Creative Commons (CC)
An American non-profit organization devoted to expanding the range of creative works available for others to build upon legally and to share. The organization has released several copyright-licenses, known as Creative Commons licenses, free of charge to the public.
cryptography
Or cryptology, is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages; various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation are central to modern cryptography. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, electrical engineering, communication science, and physics. Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.
CSV
See comma-separated values.
cyberbullying

Also cyberharassment or online bullying.

A form of bullying or harassment using electronic means.
cyberspace
Widespread, interconnected digital technology.

D

daemon
In multitasking computer operating systems, a daemon (/ˈdiːmən/ or /ˈdeɪmən/) is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Traditionally, the process names of a daemon end with the letter d, for clarification that the process is in fact a daemon, and for differentiation between a daemon and a normal computer program. For example, syslogd is a daemon that implements system logging facility, and sshd is a daemon that serves incoming SSH connections.
data
Quantities, characters, or symbols on which operations are performed by a computer, and which may be stored and transmitted in the form of electrical signals and recorded on magnetic, optical, or mechanical recording media.
data center

Also data centre.

A dedicated space used to house computer systems and associated components, such as telecommunications and data storage systems. It generally includes redundant or backup components and infrastructure for power supply, data communications connections, environmental controls (e.g. air conditioning and fire suppression) and various security devices.
database
An organized collection of data, generally stored and accessed electronically from a computer system. Where databases are more complex, they are often developed using formal design and modeling techniques.
data mining
Is a process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.
data science
An interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from data in various forms, both structured and unstructured, similar to data mining. Data science is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data. It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science.
data structure
A data organization, management, and storage format that enables efficient access and modification. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data.
data type

Also simply type.

An attribute of data which tells the compiler or interpreter how the programmer intends to use the data. Most programming languages support common data types of real, integer, and Boolean. A data type constrains the values that an expression, such as a variable or a function, might take. This data type defines the operations that can be done on the data, the meaning of the data, and the way values of that type can be stored. A type of value from which an expression may take its value.
debugging
The process of finding and resolving defects or problems within a computer program that prevent correct operation of computer software or the system as a whole. Debugging tactics can involve interactive debugging, control flow analysis, unit testing, integration testing, log file analysis, monitoring at the application or system level, memory dumps, and profiling.
declaration
In computer programming, a language construct that specifies properties of an identifier: it declares what a word (identifier) "means". Declarations are most commonly used for functions, variables, constants, and classes, but can also be used for other entities such as enumerations and type definitions. Beyond the name (the identifier itself) and the kind of entity (function, variable, etc.), declarations typically specify the data type (for variables and constants), or the type signature (for functions); types may also include dimensions, such as for arrays. A declaration is used to announce the existence of the entity to the compiler; this is important in those strongly typed languages that require functions, variables, and constants, and their types, to be specified with a declaration before use, and is used in forward declaration. The term "declaration" is frequently contrasted with the term "definition", but meaning and usage varies significantly between languages.
digital data
In information theory and information systems, the discrete, discontinuous representation of information or works. Numbers and letters are commonly used representations.
digital signal processing (DSP)
The use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency.
discrete event simulation (DES)
A model of the operation of a system as a discrete sequence of events in time. Each event occurs at a particular instant in time and marks a change of state in the system. Between consecutive events, no change in the system is assumed to occur; thus the simulation can directly jump in time from one event to the next.
disk storage

Also drive storage.

A general category of storage mechanisms where data is recorded by various electronic, magnetic, optical, or mechanical changes to a surface layer of one or more rotating disks. A disk drive is a device implementing such a storage mechanism. Notable types are the hard disk drive (HDD) containing a non-removable disk, the floppy disk drive (FDD) and its removable floppy disk, and various optical disc drives (ODD) and associated optical disc media.
distributed computing
A field of computer science that studies distributed systems. A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications.
divide and conquer algorithm
An algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
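A minimal Python sketch of the paradigm using merge sort (illustrative only): the input is divided, each half is conquered recursively, and the results are combined.

  def merge_sort(items):
      if len(items) <= 1:                  # base case: simple enough to solve directly
          return items
      mid = len(items) // 2
      left = merge_sort(items[:mid])       # divide, then solve each sub-problem recursively
      right = merge_sort(items[mid:])
      merged = []                          # combine the two sorted halves
      i = j = 0
      while i < len(left) and j < len(right):
          if left[i] <= right[j]:
              merged.append(left[i])
              i += 1
          else:
              merged.append(right[j])
              j += 1
      return merged + left[i:] + right[j:]

  print(merge_sort([5, 2, 9, 1, 5, 6]))    # [1, 2, 5, 5, 6, 9]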
DNS
See Domain Name System.
documentation
Written text or illustration that accompanies computer software or is embedded in the source code. It either explains how it operates or how to use it, and may mean different things to people in different roles.
domain
Is the targeted subject area of a computer program. It is a term used in software engineering. Formally it represents the target subject of a specific programming project, whether narrowly or broadly defined.
Domain Name System (DNS)
A hierarchical and decentralized naming system for computers, services, or other resources connected to the Internet or to a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates more readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols. By providing a worldwide, distributed directory service, the Domain Name System has been an essential component of the functionality of the Internet since 1985.
double-precision floating-point format
A computer number format. It represents a wide dynamic range of numerical values by using a floating radix point.
download
In computer networks, to receive data from a remote system, typically a server such as a web server, an FTP server, an email server, or other similar systems. This contrasts with uploading, where data is sent to a remote server. A download is a file offered for downloading or that has been downloaded, or the process of receiving such a file.

E

edge device
A device which provides an entry point into enterprise or service provider core networks. Examples include routers, routing switches, integrated access devices (IADs), multiplexers, and a variety of metropolitan area network (MAN) and wide area network (WAN) access devices. Edge devices also provide connections into carrier and service provider networks. An edge device that connects a local area network to a high speed switch or backbone (such as an ATM switch) may be called an edge concentrator.
emulator
Hardware or software that enables one computer system (called the host) to behave like another computer system.
encryption
In cryptography, encryption is the process of encoding information. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Ideally, only authorized parties can decipher a ciphertext back to plaintext and access the original information. Encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor. For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key, but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users. Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often utilized in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes utilize the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
event
An action or occurrence recognized by software, often originating asynchronously from the external environment, that may be handled by the software. Events can be generated or triggered by the system, by the user, or in other ways.
event-driven programming
A programming paradigm in which the flow of the program is determined by events such as user actions (mouse clicks, key presses), sensor outputs, or messages from other programs or threads. Event-driven programming is the dominant paradigm used in graphical user interfaces and other applications (e.g. JavaScript web applications) that are centered on performing certain actions in response to user input. This is also true of programming for device drivers (e.g. in USB device driver stacks).
evolutionary computing
A family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial-and-error problem-solvers with a metaheuristic or stochastic optimization character.
executable

Also executable code, executable file, executable program, or simply executable.

Causes a computer "to perform indicated tasks according to encoded instructions," as opposed to a data file that must be parsed by a program to be meaningful. The exact interpretation depends upon the use - while "instructions" is traditionally taken to mean machine code instructions for a physical CPU, in some contexts a file containing bytecode or scripting language instructions may also be considered executable.
execution
In computer and software engineering, the process by which a computer or virtual machine carries out the instructions of a computer program. Each instruction of a program describes a particular action which must be carried out in order for a specific problem to be solved; as the instructions of a program, and therefore the actions they describe, are carried out by an executing machine, specific effects are produced in accordance with the semantics of those instructions.
exception handling
The process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often disrupting the normal flow of program execution. It is provided by specialized programming language constructs, computer hardware mechanisms like interrupts, or operating system IPC facilities like signals.
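A brief Python sketch (the function name is invented for this example) in which an exceptional condition interrupts the normal flow and is handled by a dedicated construct:

  def safe_divide(a, b):
      try:
          return a / b
      except ZeroDivisionError:        # special processing for the anomalous condition
          print("Cannot divide by zero")
          return None

  print(safe_divide(10, 2))            # 5.0
  print(safe_divide(10, 0))            # prints the message, then None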
existence detection
An existence check before reading a file can catch and/or prevent a fatal error.
expression
In a programming language, a combination of one or more constants, variables, operators, and functions that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, as for mathematical expressions, is called evaluation.

F

fault-tolerant computer system
A system designed around the concept of fault tolerance. In essence, they must be able to continue working to a level of satisfaction in the presence of errors or breakdowns.
feasibility study
An investigation which aims to objectively and rationally uncover the strengths and weaknesses of an existing business or proposed venture, opportunities and threats present in the natural environment, the resources required to carry through, and ultimately the prospects for success. In its simplest terms, the two criteria to judge feasibility are cost required and value to be attained.
field
Data that has several parts, known as a record, can be divided into fields. Relational databases arrange data as sets of database records, so called rows. Each record consists of several fields; the fields of all records form the columns. Examples of fields: name, gender, hair colour.
filename extension
An identifier specified as a suffix to the name of a computer file. The extension indicates a characteristic of the file contents or its intended use.
filter (software)
A computer program or subroutine to process a stream, producing another stream. While a single filter can be used individually, they are frequently strung together to form a pipeline.
floating point arithmetic
In computing, floating-point arithmetic (FP) is arithmetic using formulaic representation of real numbers as an approximation to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen. A number that can be represented exactly is of the following form:
  significand × base^exponent,
where significand is an integer, base is an integer greater than or equal to two, and exponent is also an integer. For example:
  1.2345 = 12345 × 10^−4,
where 12345 is the significand, 10 is the base, and −4 is the exponent.
for loop

Also for-loop.

A control flow statement for specifying iteration, which allows code to be executed repeatedly. Various keywords are used to specify this statement: descendants of ALGOL use "for", while descendants of Fortran use "do". There are also other possibilities, e.g. COBOL uses "PERFORM VARYING".
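A brief Python sketch of a for loop whose body is executed once per element of a sequence:

  total = 0
  for n in [1, 2, 3, 4]:     # iteration over the sequence
      total += n
  print(total)               # 10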
formal methods
A set of mathematically based techniques for the specification, development, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.
formal verification
The act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
functional programming
A programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions and avoids changing state and mutable data. It is a declarative programming paradigm in that programming is done with expressions or declarations instead of statements.

G

game theory
The study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in logic and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.
garbage in, garbage out (GIGO)
A term used to describe the concept that flawed or nonsense input data produces nonsense output or "garbage". It can also refer to the unforgiving nature of programming, in which a poorly written program might produce nonsensical behavior.
Graphics Interchange Format (GIF)
A bitmap image format developed by CompuServe in 1987 and widely used on the World Wide Web. It supports up to 8 bits per pixel for each image, allowing a palette of up to 256 colors, and also supports simple animations; GIF images are compressed using the lossless Lempel–Ziv–Welch (LZW) technique.
gigabyte
A multiple of the unit byte for digital information. The prefix giga means 10^9 in the International System of Units (SI). Therefore, one gigabyte is 1,000,000,000 bytes. The unit symbol for the gigabyte is GB.
global variable
In computer programming, a variable with global scope, meaning that it is visible (hence accessible) throughout the program, unless shadowed. The set of all global variables is known as the global environment or global state. In compiled languages, global variables are generally static variables, whose extent (lifetime) is the entire runtime of the program, though in interpreted languages (including command-line interpreters), global variables are generally dynamically allocated when declared, since they are not known ahead of time.
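A brief Python sketch (names invented for this example) of a global variable visible throughout the program; in Python an explicit global declaration is needed to rebind it from inside a function:

  counter = 0                    # global variable

  def increment():
      global counter             # without this, assignment would create a local variable instead
      counter += 1

  increment()
  increment()
  print(counter)                 # 2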
graph theory
In mathematics, the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically.

H

handle
In computer programming, a handle is an abstract reference to a resource that is used when application software references blocks of memory or objects that are managed by another system like a database or an operating system.
hard problem
In computational complexity theory, a problem that is inherently difficult to solve, in the sense that it requires significant resources (such as time or memory) regardless of the algorithm used. Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other; a computational problem is a task solvable by the mechanical application of mathematical steps, such as an algorithm.
hash function
Any function that can be used to map data of arbitrary size to data of a fixed size. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. Hash functions are often used in combination with a hash table, a common data structure used in computer software for rapid data lookup. Hash functions accelerate table or database lookup by detecting duplicated records in a large file.
hash table
In computing, a hash table (hash map) is a data structure that implements an associative array abstract data type, a structure that can map keys to values. A hash table uses a hash function to compute an index into an array of buckets or slots, from which the desired value can be found.
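A minimal Python sketch of a hash table with separate chaining (class and method names invented for this example); the hash function maps each key to a bucket index:

  class HashTable:
      def __init__(self, size=8):
          self.buckets = [[] for _ in range(size)]    # fixed array of buckets

      def _index(self, key):
          return hash(key) % len(self.buckets)        # hash function -> bucket index

      def put(self, key, value):
          bucket = self.buckets[self._index(key)]
          for i, (k, _) in enumerate(bucket):
              if k == key:
                  bucket[i] = (key, value)            # replace an existing pair
                  return
          bucket.append((key, value))                 # otherwise add a new pair

      def get(self, key):
          for k, v in self.buckets[self._index(key)]:
              if k == key:
                  return v
          return None                                 # key not present

  table = HashTable()
  table.put("apple", 3)
  print(table.get("apple"))    # 3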
heap
A specialized tree-based data structure which is essentially an almost complete tree that satisfies the heap property: if P is a parent node of C, then the key (the value) of P is either greater than or equal to (in a max heap) or less than or equal to (in a min heap) the key of C. The node at the "top" of the heap (with no parents) is called the root node.
heapsort
A comparison-based sorting algorithm. Heapsort can be thought of as an improved selection sort: like that algorithm, it divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element and moving that to the sorted region. The improvement consists of the use of a heap data structure rather than a linear-time search to find the maximum.
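A minimal Python sketch of the idea using the standard heapq module; note that heapq provides a min-heap, so this variant repeatedly extracts the smallest remaining element rather than the largest:

  import heapq

  def heapsort(items):
      heap = list(items)
      heapq.heapify(heap)                  # rearrange so the list satisfies the heap property
      return [heapq.heappop(heap) for _ in range(len(heap))]   # repeatedly extract the minimum

  print(heapsort([5, 1, 4, 2, 3]))         # [1, 2, 3, 4, 5]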
human-computer interaction (HCI)
Researches the design and use of computer technology, focused on the interfaces between people (users) and computers. Researchers in the field of HCI both observe the ways in which humans interact with computers and design technologies that let humans interact with computers in novel ways. As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study.

I

identifier
In computer languages, identifiers are tokens (also called symbols) which name language entities. Some of the kinds of entities an identifier might denote include variables, types, labels, subroutines, and packages.
IDE
See integrated development environment.
image processing
The use of a digital computer to process digital images through algorithms; a subcategory or field of digital signal processing.
imperative programming
A programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates.
incremental build model
A method of software development where the product is designed, implemented and tested incrementally (a little more is added each time) until the product is finished. It involves both development and maintenance. The product is defined as finished when it satisfies all of its requirements. This model combines the elements of the waterfall model with the iterative philosophy of prototyping.
information space analysis
A deterministic method, enhanced by machine intelligence, for locating and assessing resources for team-centric efforts.
information visualization
The study of interactive visual representations of abstract data intended to reinforce human cognition; the abstract data include both numerical and non-numerical data, such as text and geographic information.
inheritance
In object-oriented programming, the mechanism of basing an object or class upon another object (prototype-based inheritance) or class (class-based inheritance), retaining similar implementation. Also defined as deriving new classes (sub classes) from existing ones (super class or base class) and forming them into a hierarchy of classes.
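A brief Python sketch of class-based inheritance (class names invented for this example): the subclass retains the base class's interface and overrides part of its behavior.

  class Animal:                        # base (super) class
      def speak(self):
          return "..."

  class Dog(Animal):                   # subclass derived from Animal
      def speak(self):                 # overrides the inherited behavior
          return "Woof"

  print(Dog().speak())                 # Woof
  print(isinstance(Dog(), Animal))     # True: a Dog is also an Animal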
input/output (I/O)

Also informally io or IO.

The communication between an information processing system, such as a computer, and the outside world, possibly a human or another information processing system. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to "perform I/O" is to perform an input or output operation.
insertion sort
A simple sorting algorithm that builds the final sorted array (or list) one item at a time.
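A minimal Python sketch (illustrative only) that grows the sorted prefix of the list one item at a time:

  def insertion_sort(items):
      for i in range(1, len(items)):
          current = items[i]
          j = i - 1
          while j >= 0 and items[j] > current:   # shift larger elements one place right
              items[j + 1] = items[j]
              j -= 1
          items[j + 1] = current                 # insert the item into its sorted position
      return items

  print(insertion_sort([4, 2, 5, 1, 3]))         # [1, 2, 3, 4, 5]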
instruction cycle

Also fetch–decode–execute cycle or simply fetch-execute cycle.

The cycle which the central processing unit (CPU) follows from boot-up until the computer has shut down in order to process instructions. It is composed of three main stages: the fetch stage, the decode stage, and the execute stage.
integer
A datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies, so the set of integer sizes available varies between different types of computers. Computer hardware, including virtual machines, nearly always provides a way to represent a processor register or memory address as an integer.
integrated development environment (IDE)
A software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools, and a debugger.
integration testing
(sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. Integration testing is conducted to evaluate the compliance of a system or component with specified functional requirements. It occurs after unit testing and before validation testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
intellectual property (IP)
A category of legal property that includes intangible creations of the human intellect. There are many types of intellectual property, and some countries recognize more than others. The most well-known types are copyrights, patents, trademarks, and trade secrets.
intelligent agent
In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes its environment through sensors, acts upon it using actuators, and directs its activity towards achieving goals. Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent.
interface
A shared boundary across which two or more separate components of a computer system exchange information. The exchange can be between software, computer hardware, peripheral devices, humans, and combinations of these. Some computer hardware devices, such as a touchscreen, can both send and receive data through the interface, while others such as a mouse or microphone may only provide an interface to send data to a given system.
internal documentation
Computer software is said to have internal documentation if the notes on how and why various parts of code operate are included within the source code as comments. It is often combined with meaningful variable names with the intention of providing potential future programmers a means of understanding the workings of the code. This contrasts with external documentation, where programmers keep their notes and explanations in a separate document.
internet
The global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies.
internet bot

Also web robot, robot, or simply bot.

A software application that runs automated tasks (scripts) over the Internet. Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone. The largest use of bots is in web spidering (web crawler), in which an automated script fetches, analyzes and files information from web servers at many times the speed of a human.
interpreter
A computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been previously compiled into a machine language program.
invariant
A logical assertion that is always held to be true during a certain phase of execution of a program, and can therefore be relied upon by the programmer and by analysis tools. For example, a loop invariant is a condition that is true at the beginning and the end of every execution of a loop.
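A small, hypothetical Python example in which a loop invariant is checked with assertions:

    def sum_list(values):
        # Loop invariant: before each iteration, total equals the sum
        # of the elements already processed (values[:i]).
        total = 0
        for i, v in enumerate(values):
            assert total == sum(values[:i])   # invariant holds at loop entry
            total += v
        assert total == sum(values)           # the invariant implies the postcondition
        return total

    print(sum_list([1, 2, 3]))  # 6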
iteration
Is the repetition of a process in order to generate an outcome. The sequence will approach some end point or end value. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration. In mathematics and computer science, iteration (along with the related technique of recursion) is a standard element of algorithms.

J

Java
A general-purpose programming language that is class-based, object-oriented (although not a pure OO language), and designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere" (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation.

K

kernel
The first section of an operating system to load into memory. As the center of the operating system, the kernel needs to be small, efficient, and loaded into a protected area in the memory so that it cannot be overwritten. It may be responsible for such essential tasks as disk drive management, file management, memory management, process management, etc.

L

library (computing)
A collection of non-volatile resources used by computer programs, often for software development. These may include configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values, or type specifications.
linear search

Also sequential search.

A method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.
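A minimal Python sketch:

    def linear_search(items, target):
        # Check each element in turn until a match is found.
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1   # not found

    print(linear_search([7, 3, 9], 9))  # 2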
linked list
A linear collection of data elements, whose order is not given by their physical placement in memory. Instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence.
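A minimal singly linked list sketched in Python (the Node class here is illustrative, not a standard library type):

    class Node:
        def __init__(self, data, next_node=None):
            self.data = data        # the element stored in this node
            self.next = next_node   # reference to the next node, or None at the end

    # Build the list 1 -> 2 -> 3 and traverse it.
    head = Node(1, Node(2, Node(3)))
    current = head
    while current is not None:
        print(current.data)
        current = current.next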
linker
or link editor, is a computer utility program that takes one or more object files generated by a compiler or an assembler and combines them into a single executable file, library file, or another 'object' file. A simpler version that writes its output directly to memory is called the loader, though loading is typically considered a separate process.
list
An abstract data type that represents a countable number of ordered values, where the same value may occur more than once. An instance of a list is a computer representation of the mathematical concept of a finite sequence; the (potentially) infinite analog of a list is a stream. Lists are a basic example of containers, as they contain other values. If the same value occurs multiple times, each occurrence is considered a distinct item.
loader
The part of an operating system that is responsible for loading programs and libraries. It is one of the essential stages in the process of starting a program, as it places programs into memory and prepares them for execution. Loading a program involves reading the contents of the executable file containing the program instructions into memory, and then carrying out other required preparatory tasks to prepare the executable for running. Once loading is complete, the operating system starts the program by passing control to the loaded program code.
logic error
In computer programming, a bug in a program that causes it to operate incorrectly, but not to terminate abnormally (or crash). A logic error produces unintended or undesired output or other behaviour, although it may not immediately be recognized as such.
logic programming
A type of programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP), and Datalog.

M

machine learning (ML)
The scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.
machine vision (MV)
The technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as security and vehicle guidance.
mathematical logic
A subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.
matrix
In mathematics, a matrix, (plural matrices), is a rectangular array (see irregular matrix) of numbers, symbols, or expressions, arranged in rows and columns.
memory
Computer data storage, often called storage, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
merge sort

Also mergesort.

An efficient, general-purpose, comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the order of equal elements is the same in the input and output. Merge sort is a divide and conquer algorithm that was invented by John von Neumann in 1945. A detailed description and analysis of bottom-up mergesort appeared in a report by Goldstine and von Neumann as early as 1948.
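A simplified, non-in-place Python sketch of the divide-and-conquer idea:

    def merge_sort(a):
        # Divide: split the list in half and sort each half recursively.
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])
        right = merge_sort(a[mid:])
        # Conquer: merge the two sorted halves; <= keeps the sort stable.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 4, 2, 1]))  # [1, 2, 2, 4, 5]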
method
In object-oriented programming (OOP), a procedure associated with a message and an object. An object consists of data and behavior. The data and behavior comprise an interface, which specifies how the object may be utilized by any of various consumers of the object.
methodology
In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.
modem

Portmanteau of modulator-demodulator.

A hardware device that converts data into a format suitable for a transmission medium so that it can be transmitted from one computer to another (historically along telephone wires). A modem modulates one or more carrier wave signals to encode digital information for transmission and demodulates signals to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded reliably to reproduce the original digital data. Modems can be used with almost any means of transmitting analog signals, from light-emitting diodes to radio. A common type of modem turns the digital data of a computer into a modulated electrical signal for transmission over telephone lines, which is demodulated by another modem at the receiver side to recover the digital data.

N

natural language processing (NLP)
A subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.
node
Is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.
number theory
A branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions.
numerical analysis
The study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).
numerical method
In numerical analysis, a numerical method is a mathematical tool designed to solve numerical problems. The implementation of a numerical method with an appropriate convergence check in a programming language is called a numerical algorithm.

O

object
An object can be a variable, a data structure, a function, or a method, and as such, is a value in memory referenced by an identifier. In the class-based object-oriented programming paradigm, object refers to a particular instance of a class, where the object can be a combination of variables, functions, and data structures. In relational database management, an object can be a table or column, or an association between data and a database entity (such as relating a person's age to a specific person).
object code

Also object module.

The product of a compiler. In a general sense object code is a sequence of statements or instructions in a computer language, usually a machine code language (i.e., binary) or an intermediate language such as register transfer language (RTL). The term indicates that the code is the goal or result of the compiling process, with some early sources referring to source code as a "subject program."
object-oriented analysis and design (OOAD)
A technical approach for analyzing and designing an application, system, or business by applying object-oriented programming, as well as using visual modeling throughout the software development process to guide stakeholder communication and product quality.
object-oriented programming (OOP)
A programming paradigm based on the concept of "objects", which can contain data, in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods). A feature of objects is an object's procedures that can access and often modify the data fields of the object with which they are associated (objects have a notion of "this" or "self"). In OOP, computer programs are designed by making them out of objects that interact with one another. OOP languages are diverse, but the most popular ones are class-based, meaning that objects are instances of classes, which also determine their types.
open-source software (OSS)
A type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration.
operating system (OS)
System software that manages computer hardware, software resources, and provides common services for computer programs.
optical fiber
A flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair. Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from which metal wires suffer.

P

pair programming
An agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently.
parallel computing
A type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism.
parameter

Also formal argument.

In computer programming, a special kind of variable, used in a subroutine to refer to one of the pieces of data provided as input to the subroutine. These pieces of data are the values of the arguments (often called actual arguments or actual parameters) with which the subroutine is going to be called/invoked. An ordered list of parameters is usually included in the definition of a subroutine, so that, each time the subroutine is called, its arguments for that call are evaluated, and the resulting values can be assigned to the corresponding parameters.
peripheral
Any auxiliary or ancillary device connected to or integrated within a computer system and used to send information to or retrieve information from the computer. An input device sends data or instructions to the computer; an output device provides output from the computer to the user; and an input/output device performs both functions.
pointer
Is an object in many programming languages that stores a memory address. This can be that of another value located in computer memory, or in some cases, that of memory-mapped computer hardware. A pointer references a location in memory, and obtaining the value stored at that location is known as dereferencing the pointer. As an analogy, a page number in a book's index could be considered a pointer to the corresponding page; dereferencing such a pointer would be done by flipping to the page with the given page number and reading the text found on that page. The actual format and content of a pointer variable is dependent on the underlying computer architecture.
postcondition
In computer programming, a condition or predicate that must always be true just after the execution of some section of code or after an operation in a formal specification. Postconditions are sometimes tested using assertions within the code itself. Often, postconditions are simply included in the documentation of the affected section of code.
precondition
In computer programming, a condition or predicate that must always be true just prior to the execution of some section of code or before an operation in a formal specification. If a precondition is violated, the effect of the section of code becomes undefined and thus may or may not carry out its intended work. Security problems can arise due to incorrect preconditions.
primary storage
Also known as main memory, internal memory, or prime memory, and often referred to simply as memory, primary storage is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
primitive data type
priority queue
An abstract data type which is like a regular queue or stack data structure, but where additionally each element has a "priority" associated with it. In a priority queue, an element with high priority is served before an element with low priority. In some implementations, if two elements have the same priority, they are served according to the order in which they were enqueued, while in other implementations, ordering of elements with the same priority is undefined.
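A brief illustration using Python's heapq module, which implements a min-heap (lower priority numbers are served first):

    import heapq

    pq = []                                   # the heap is just a list
    heapq.heappush(pq, (2, "write report"))   # (priority, task)
    heapq.heappush(pq, (1, "fix outage"))
    heapq.heappush(pq, (3, "read email"))

    while pq:
        priority, task = heapq.heappop(pq)    # always removes the lowest priority value
        print(priority, task)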
procedural programming
procedural generation
procedure
In computer programming, a subroutine is a sequence of program instructions that performs a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed. Subroutines may be defined within programs, or separately in libraries that can be used by many programs. In different programming languages, a subroutine may be called a routine, subprogram, function, method, or procedure. Technically, these terms all have different definitions. The generic, umbrella term callable unit is sometimes used.
program lifecycle phase
Program lifecycle phases are the stages a computer program undergoes, from initial creation to deployment and execution. The phases are edit time, compile time, link time, distribution time, installation time, load time, and run time.
programming language
A formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms.
programming language implementation
Is a system for executing computer programs. There are two general approaches to programming language implementation: interpretation and compilation.
programming language theory
(PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and of their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, linguistics and even cognitive science. It has become a well-recognized branch of computer science, and an active research area, with results published in numerous journals dedicated to PLT, as well as in general computer science and engineering publications.
Prolog
Is a logic programming language associated with artificial intelligence and computational linguistics. Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.
Python
Is an interpreted, high-level and general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.

Q

quantum computing
The use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically.
queue
A collection in which the entities in the collection are kept in order and the principal (or only) operations on the collection are the addition of entities to the rear terminal position, known as enqueue, and removal of entities from the front terminal position, known as dequeue.
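A brief illustration using Python's collections.deque:

    from collections import deque

    q = deque()
    q.append("first")      # enqueue at the rear
    q.append("second")
    print(q.popleft())     # dequeue from the front -> "first"
    print(q.popleft())     # -> "second"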
quicksort

Also partition-exchange sort.

An efficient sorting algorithm which serves as a systematic method for placing the elements of a random access file or an array in order.
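A simplified Python sketch of the partitioning idea (real partition-exchange quicksort works in place; this version allocates new lists for clarity):

    def quicksort(a):
        # Partition around a pivot, then sort the partitions recursively.
        if len(a) <= 1:
            return a
        pivot = a[len(a) // 2]
        smaller = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        larger = [x for x in a if x > pivot]
        return quicksort(smaller) + equal + quicksort(larger)

    print(quicksort([3, 6, 1, 6, 2]))  # [1, 2, 3, 6, 6]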

R

R programming language
R is a programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis.
radix

Also base.

In digital numeral systems, the number of unique digits, including the digit zero, used to represent numbers in a positional numeral system. For example, in the decimal/denary system (the most common system in use today) the radix (base number) is ten, because it uses the ten digits from 0 through 9, and all other numbers are uniquely specified by positional combinations of these ten base digits; in the binary system that is the standard in computing, the radix is two, because it uses only two digits, 0 and 1, to uniquely specify each number.
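For example, in Python the same digit string can be interpreted under different radices:

    # The same digit string interpreted in different radices.
    print(int("1010", 2))    # radix 2 (binary)       -> 10
    print(int("1010", 10))   # radix 10 (decimal)     -> 1010
    print(int("ff", 16))     # radix 16 (hexadecimal) -> 255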
record
A record (also called a structure, struct, or compound data) is a basic data structure. Records in a database or spreadsheet are usually called "rows".
recursion
Occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances (function values), it is often done in such a way that no infinite loop or infinite chain of references can occur.
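The classic illustration is the factorial function, sketched here in Python:

    def factorial(n):
        # The function is defined in terms of itself, with a base case
        # that stops the chain of recursive calls.
        if n <= 1:
            return 1                      # base case
        return n * factorial(n - 1)       # recursive case

    print(factorial(5))  # 120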
reference
Is a value that enables a program to indirectly access a particular datum, such as a variable's value or a record, in the computer's memory or in some other storage device. The reference is said to refer to the datum, and accessing the datum is called dereferencing the reference.
reference counting
A programming technique of storing the number of references, pointers, or handles to a resource, such as an object, a block of memory, disk space, and others. In garbage collection algorithms, reference counts may be used to deallocate objects which are no longer needed.
regression testing
(rarely non-regression testing) is re-running functional and non-functional tests to ensure that previously developed and tested software still performs after a change. If not, that would be called a regression. Changes that may require regression testing include bug fixes, software enhancements, configuration changes, and even substitution of electronic components. As regression test suites tend to grow with each found defect, test automation is frequently involved. Sometimes a change impact analysis is performed to determine an appropriate subset of tests (non-regression analysis).
relational database
Is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970. A software system used to maintain relational databases is a relational database management system (RDBMS). Many relational database systems have an option of using the SQL (Structured Query Language) for querying and maintaining the database.
reliability engineering
A sub-discipline of systems engineering that emphasizes dependability in the lifecycle management of a product. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
requirements analysis
In systems engineering and software engineering, requirements analysis focuses on the tasks that determine the needs or conditions to meet the new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, analyzing, documenting, validating and managing software or system requirements.
robotics
An interdisciplinary branch of engineering and science that includes mechanical engineering, electronic engineering, information engineering, computer science, and others. Robotics involves design, construction, operation, and use of robots, as well as computer systems for their perception, control, sensory feedback, and information processing. The goal of robotics is to design intelligent machines that can help and assist humans in their day-to-day lives and keep everyone safe.
round-off error

Also rounding error.

The difference between the result produced by a given algorithm using exact arithmetic and the result produced by the same algorithm using finite-precision, rounded arithmetic. Rounding errors are due to inexactness in the representation of real numbers and the arithmetic operations done with them. This is a form of quantization error. When using approximation equations or algorithms, especially when using finitely many digits to represent real numbers (which in theory have infinitely many digits), one of the goals of numerical analysis is to estimate computation errors. Computation errors, also called numerical errors, include both truncation errors and roundoff errors.
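A small Python illustration:

    import math

    # 0.1 and 0.2 have no exact binary floating-point representation,
    # so their sum carries a small round-off error.
    print(0.1 + 0.2)            # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)     # False

    # Comparing with a tolerance accounts for the error.
    print(math.isclose(0.1 + 0.2, 0.3))  # True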
router
A networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork (e.g. the Internet) until it reaches its destination node.
routing table
In computer networking a routing table, or routing information base (RIB), is a data table stored in a router or a network host that lists the routes to particular network destinations, and in some cases, metrics (distances) associated with those routes. The routing table contains information about the topology of the network immediately around it.
run time
Runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program.
run time error
A runtime error is detected after or during the execution (running state) of a program, whereas a compile-time error is detected by the compiler before the program is ever executed. Type checking, register allocation, code generation, and code optimization are typically done at compile time, but may be done at runtime depending on the particular language and compiler. Common runtime errors include division by zero, domain errors, array subscript out of bounds, and arithmetic underflow and overflow; they are handled differently by different programming languages and are generally considered software bugs, which may or may not be caught and handled by any particular language.

S

search algorithm
Any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a problem domain, either with discrete or continuous values.
secondary storage
Also known as external memory or auxiliary storage, differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
selection sort
Is an in-place comparison sorting algorithm. It has an O(n²) time complexity, which makes it inefficient on large lists, and it generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.
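A minimal Python sketch showing the nested loops that give the quadratic running time:

    def selection_sort(a):
        # Repeatedly select the smallest remaining element and swap it
        # into place; the two nested loops give the O(n^2) running time.
        for i in range(len(a)):
            smallest = i
            for j in range(i + 1, len(a)):
                if a[j] < a[smallest]:
                    smallest = j
            a[i], a[smallest] = a[smallest], a[i]
        return a

    print(selection_sort([4, 2, 5, 1]))  # [1, 2, 4, 5]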
semantics
In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically valid strings defined by a specific programming language, showing the computation involved. In such a case that the evaluation would be of syntactically invalid strings, the result would be non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or an explanation of how the program will be executed on a certain platform, hence creating a model of computation.
sequence
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed and order does matter. Like a set, it contains members (also called elements, or terms). The number of elements (possibly infinite) is called the length of the sequence. Unlike a set, the same elements can appear multiple times at different positions in a sequence, and order does matter. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers (for infinite sequences) or the set of the first n natural numbers (for a sequence of finite length n). The position of an element in a sequence is its rank or index; it is the natural number for which the element is the image. The first element has index 0 or 1, depending on the context or a specific convention. When a symbol is used to denote a sequence, the nth element of the sequence is denoted by this symbol with n as subscript; for example, the nth element of the Fibonacci sequence F is generally denoted Fn. For example, (M, A, R, Y) is a sequence of letters with the letter 'M' first and 'Y' last. This sequence differs from (A, R, M, Y). Also, the sequence (1, 1, 2, 3, 5, 8), which contains the number 1 at two different positions, is a valid sequence. Sequences can be finite, as in these examples, or infinite, such as the sequence of all even positive integers (2, 4, 6, ...). In computing and computer science, finite sequences are sometimes called strings, words or lists, the different names commonly corresponding to different ways to represent them in computer memory; infinite sequences are called streams. The empty sequence ( ) is included in most notions of sequence, but may be excluded depending on the context.
serializability
In concurrency control of databases, transaction processing (transaction management), and various transactional applications (e.g., transactional memory and software transactional memory), both centralized and distributed, a transaction schedule is serializable if its outcome (e.g., the resulting database state) is equal to the outcome of its transactions executed serially, i.e. without overlapping in time. Transactions are normally executed concurrently (they overlap), since this is the most efficient way. Serializability is the major correctness criterion for concurrent transactions' executions. It is considered the highest level of isolation between transactions, and plays an essential role in concurrency control. As such it is supported in all general purpose database systems. Strong strict two-phase locking (SS2PL) is a popular serializability mechanism utilized in most of the database systems (in various variants) since their early days in the 1970s.
serialization
Is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later (possibly in a different computer environment). When the resulting series of bits is reread according to the serialization format, it can be used to create a semantically identical clone of the original object. For many complex objects, such as those that make extensive use of references, this process is not straightforward. Serialization of object-oriented objects does not include any of their associated methods with which they were previously linked. This process of serializing an object is also called marshalling an object in some situations. The opposite operation, extracting a data structure from a series of bytes, is deserialization, (also called unserialization or unmarshalling).
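A brief illustration using JSON as the serialization format in Python (other formats and libraries behave analogously):

    import json

    record = {"name": "Ada", "languages": ["Python", "Prolog"]}

    text = json.dumps(record)     # serialize: object -> string
    clone = json.loads(text)      # deserialize: string -> equivalent object

    print(text)                   # {"name": "Ada", "languages": ["Python", "Prolog"]}
    print(clone == record)        # True: a semantically identical clone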
service level agreement
(SLA), is a commitment between a service provider and a client. Particular aspects of the service – quality, availability, responsibilities – are agreed between the service provider and the service user. The most common component of an SLA is that the services should be provided to the customer as agreed upon in the contract. As an example, Internet service providers and telcos will commonly include service level agreements within the terms of their contracts with customers to define the level(s) of service being sold in plain language terms. In this case the SLA will typically have a technical definition in mean time between failures (MTBF), mean time to repair or mean time to recovery (MTTR); identifying which party is responsible for reporting faults or paying fees; responsibility for various data rates; throughput; jitter; or similar measurable details.
set
Is an abstract data type that can store unique values, without any particular order. It is a computer implementation of the mathematical concept of a finite set. Unlike most other collection types, rather than retrieving a specific element from a set, one typically tests a value for membership in a set.
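A brief illustration using Python's built-in set type:

    s = {3, 1, 2}
    s.add(2)               # duplicates are ignored; values stay unique
    print(2 in s)          # membership test -> True
    print(5 in s)          # -> False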
singleton variable
A variable that is referenced only once. May be used as a dummy argument in a function call, or when its address is assigned to another variable which subsequently accesses its allocated storage. Singleton variables sometimes occur because a mistake has been made – such as assigning a value to a variable and forgetting to use it later, or mistyping one instance of the variable name. Some compilers and lint-like tools flag occurrences of singleton variables.
software
Computer software, or simply software, is a collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.
software agent
Is a computer program that acts for a user or other program in a relationship of agency, which derives from the Latin agere (to do): an agreement to act on one's behalf. Such "action on behalf of" implies the authority to decide which, if any, action is appropriate. Agents are colloquially known as bots, from robot. They may be embodied, as when execution is paired with a robot body, or as software such as a chatbot executing on a phone (e.g. Siri) or other computing device. Software agents may be autonomous or work together with other agents or people. Software agents interacting with people (e.g. chatbots, human-robot interaction environments) may possess human-like qualities such as natural language understanding and speech, personality or embody humanoid form (see Asimo).
software construction
Is a software engineering discipline. It is the detailed creation of working meaningful software through a combination of coding, verification, unit testing, integration testing, and debugging. It is linked to all the other software engineering disciplines, most strongly to software design and software testing.
software deployment
Is all of the activities that make a software system available for use.
software design
Is the process by which an agent creates a specification of a software artifact, intended to accomplish goals, using a set of primitive components and subject to constraints. Software design may refer to either "all the activity involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying complex systems" or "the activity following requirements specification and before programming, as ... a stylized software engineering process."
software development
Is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components. Software development is a process of writing and maintaining the source code, but in a broader sense, it includes all that is involved between the conception of the desired software through to the final manifestation of the software, sometimes in a planned and structured process. Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.
software development process
In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application. Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming.
software engineering
Is the systematic application of engineering approaches to the development of software. Software engineering is a computing discipline.
software maintenance
In software engineering is the modification of a software product after delivery to correct faults, to improve performance or other attributes.
software prototyping
Is the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur in software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing. A prototype typically simulates only a few aspects of, and may be completely different from, the final product.
software requirements specification
(SRS), is a description of a software system to be developed. The software requirements specification lays out functional and non-functional requirements, and it may include a set of use cases that describe user interactions that the software must provide to the user for perfect interaction.
software testing
Is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use.
sorting algorithm
Is an algorithm that puts elements of a list in a certain order. The most frequently used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output. More formally, the output of any sorting algorithm must satisfy two conditions:
  1. The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order);
  2. The output is a permutation (a reordering, yet retaining all of the original elements) of the input.
Further, the input data is often stored in an array, which allows random access, rather than a list, which only allows sequential access; though many algorithms can be applied to either type of data after suitable modification.
source code
In computing, source code is any collection of code, with or without comments, written using a human-readable programming language, usually as plain text. The source code of a program is specially designed to facilitate the work of computer programmers, who specify the actions to be performed by a computer mostly by writing source code. The source code is often transformed by an assembler or compiler into binary machine code that can be executed by the computer. The machine code might then be stored for execution at a later time. Alternatively, source code may be interpreted and thus immediately executed.
spiral model
Is a risk-driven software development process model. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping.
stack
Is an abstract data type that serves as a collection of elements, with two main principal operations:
  • push, which adds an element to the collection, and
  • pop, which removes the most recently added element that was not yet removed.
The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first out). Additionally, a peek operation may give access to the top without modifying the stack. The name "stack" for this type of structure comes from the analogy to a set of physical items stacked on top of each other. This structure makes it easy to take an item off the top of the stack, while getting to an item deeper in the stack may require taking off multiple other items first.
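A brief illustration using a Python list as the underlying storage:

    stack = []
    stack.append("a")    # push
    stack.append("b")    # push
    print(stack[-1])     # peek -> "b"
    print(stack.pop())   # pop  -> "b" (last in, first out)
    print(stack.pop())   # pop  -> "a"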
state
In information technology and computer science, a system is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system.
statement
In computer programming, a statement is a syntactic unit of an imperative programming language that expresses some action to be carried out. A program written in such a language is formed by a sequence of one or more statements. A statement may have internal components (e.g., expressions).
storage
Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
stream
Is a sequence of data elements made available over time. A stream can be thought of as items on a conveyor belt being processed one at a time rather than in large batches.
string
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed (after creation). A string is generally considered as a data type and is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding. String may also denote more general arrays or other sequence (or list) data types and structures.
structured storage
A NoSQL (originally referring to "non-SQL" or "non-relational") database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Such databases have existed since the late 1960s, but the name "NoSQL" was only coined in the early 21st century, triggered by the needs of Web 2.0 companies. NoSQL databases are increasingly used in big data and real-time web applications. NoSQL systems are also sometimes called "Not only SQL" to emphasize that they may support SQL-like query languages or sit alongside SQL databases in polyglot-persistent architectures.
subroutine
In computer programming, a subroutine is a sequence of program instructions that performs a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed. Subroutines may be defined within programs, or separately in libraries that can be used by many programs. In different programming languages, a subroutine may be called a routine, subprogram, function, method, or procedure. Technically, these terms all have different definitions. The generic, umbrella term callable unit is sometimes used.
symbolic computation
In mathematics and computer science, computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols.
syntax
The syntax of a computer language is the set of rules that defines the combinations of symbols that are considered to be correctly structured statements or expressions in that language. This applies both to programming languages, where the document represents source code, and to markup languages, where the document represents data.
syntax error
Is an error in the syntax of a sequence of characters or tokens that is intended to be written in a particular programming language. For compiled languages, syntax errors are detected at compile time; a program will not compile until all syntax errors are corrected. For interpreted languages, however, a syntax error may be detected during program execution, and an interpreter's error messages might not differentiate syntax errors from errors of other kinds. There is some disagreement as to just what errors are "syntax errors". For example, some would say that the use of an uninitialized variable's value in Java code is a syntax error, but many others would disagree and would classify this as a (static) semantic error.
system console
The system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators. Today communication with system consoles is generally done abstractly, via the standard streams (stdin, stdout, and stderr), but there may be system-specific interfaces, for example those used by the system kernel.

T

technical documentation
In engineering, any type of documentation that describes handling, functionality, and architecture of a technical product or a product under development or use. The intended recipient for product technical documentation is both the (proficient) end user as well as the administrator/service or maintenance technician. In contrast to a mere "cookbook" manual, technical documentation aims at providing enough information for a user to understand inner and outer dependencies of the product at hand.
third-generation programming language
A third-generation programming language (3GL) is a high-level computer programming language that tends to be more machine-independent and programmer-friendly than the machine code of the first-generation and assembly languages of the second-generation, while having a less specific focus to the fourth and fifth generations. Examples of common and historical third-generation programming languages are ALGOL, BASIC, C, COBOL, Fortran, Java, and Pascal.
top-down and bottom-up design
tree
A widely used abstract data type (ADT) that simulates a hierarchical tree structure, with a root value and subtrees of children with a parent node, represented as a set of linked nodes.
type theory
In mathematics, logic, and computer science, a type theory is any of a class of formal systems, some of which can serve as alternatives to set theory as a foundation for all mathematics. In type theory, every "term" has a "type" and operations are restricted to terms of a certain type.

U

upload
In computer networks, to send data to a remote system such as a server or another client so that the remote system can store a copy. Contrast download.
Uniform Resource Locator (URL)

Colloquially web address.

A reference to a web resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI), although many people use the two terms interchangeably. URLs occur most commonly to reference web pages (http), but are also used for file transfer (ftp), email (mailto), database access (JDBC), and many other applications.
user
Is a person who utilizes a computer or network service. Users of computer systems and software products generally lack the technical expertise required to fully understand how they work. Power users use advanced features of programs, though they are not necessarily capable of computer programming and system administration.
user agent
Software (a software agent) that acts on behalf of a user, such as a web browser that "retrieves, renders and facilitates end user interaction with Web content". An email reader is a mail user agent.
user interface (UI)
The space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, whilst the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to or involve such disciplines as ergonomics and psychology.
user interface design

Also user interface engineering.

The design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design).

V

variable
In computer programming, a variable, or scalar, is a storage location (identified by a memory address) paired with an associated symbolic name (an identifier), which contains some known or unknown quantity of information referred to as a value. The variable name is the usual way to reference the stored value, in addition to referring to the variable itself, depending on the context. This separation of name and content allows the name to be used independently of the exact information it represents. The identifier in computer source code can be bound to a value during run time, and the value of the variable may therefore change during the course of program execution.
virtual machine (VM)
An emulation of a computer system. Virtual machines are based on computer architectures and attempt to provide the same functionality as a physical computer. Their implementations may involve specialized hardware, software, or a combination of both.
V-Model
A software development process that may be considered an extension of the waterfall model, and is an example of the more general V-model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.

W

waterfall model
A breakdown of project activities into linear sequential phases, where each phase depends on the deliverables of the previous one and corresponds to a specialisation of tasks. The approach is typical for certain areas of engineering design. In software development, it tends to be among the less iterative and flexible approaches, as progress flows in largely one direction ("downwards" like a waterfall) through the phases of conception, initiation, analysis, design, construction, testing, deployment and maintenance.
Waveform Audio File Format

Also WAVE or WAV due to its filename extension.

An audio file format standard, developed by Microsoft and IBM, for storing an audio bitstream on PCs. It is an application of the Resource Interchange File Format (RIFF) bitstream format method for storing data in "chunks", and thus is also close to the 8SVX and the AIFF format used on Amiga and Macintosh computers, respectively. It is the main format used on Microsoft Windows systems for raw and typically uncompressed audio. The usual bitstream encoding is the linear pulse-code modulation (LPCM) format.
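The following minimal Python sketch (illustrative, not from the original entry) shows how the RIFF "chunk" structure described above can be inspected: the file opens with a "RIFF" chunk whose form type is "WAVE", followed by sub-chunks such as "fmt " and "data". The file path is a placeholder.

```python
# A minimal sketch of walking the RIFF chunks of a WAV file.
# "example.wav" is a placeholder path; no error handling is shown.
import struct

with open("example.wav", "rb") as f:
    riff, size, form = struct.unpack("<4sI4s", f.read(12))
    assert riff == b"RIFF" and form == b"WAVE"
    while True:
        header = f.read(8)
        if len(header) < 8:
            break
        chunk_id, chunk_size = struct.unpack("<4sI", header)
        print(chunk_id.decode("ascii"), chunk_size)
        f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned
```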
web crawler

Also spider, spiderbot, or simply crawler.

An Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing (web spidering).
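As a rough illustration (a simplified sketch, not part of the original entry), the Python code below shows the basic crawling loop: fetch a page, extract its links, and enqueue URLs that have not yet been visited. The seed URL is a placeholder, and a real crawler would also respect robots.txt, rate limits, and politeness policies.

```python
# A minimal breadth-first crawling loop: fetch, extract links, enqueue.
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets of <a> tags from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawl(seed, limit=10):
    seen, frontier = {seed}, deque([seed])
    while frontier and len(seen) <= limit:
        url = frontier.popleft()
        try:
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        except OSError:
            continue                      # skip pages that fail to download
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return seen

# crawl("https://example.com/")  # returns the set of discovered URLs
```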
Wi-Fi
A family of wireless networking technologies, based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access. Wi‑Fi is a trademark of the non-profit Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing.

X

XHTML

Abbreviation of eXtensible HyperText Markup Language.

Part of the family of XML markup languages. It mirrors or extends versions of the widely used HyperText Markup Language (HTML), the language in which web pages are formulated.
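Because XHTML is an XML vocabulary, its documents must be well-formed and can be handled by generic XML tools. The short Python sketch below (illustrative, not from the original entry) parses a small XHTML fragment with the standard library's XML parser.

```python
# A minimal sketch: an XHTML document is well-formed XML, so a generic XML
# parser can process it. The fragment below is illustrative only.
import xml.etree.ElementTree as ET

xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Example</title></head>
  <body><p>Hello, world.</p></body>
</html>"""

root = ET.fromstring(xhtml)
# Element tags carry the XHTML namespace,
# e.g. "{http://www.w3.org/1999/xhtml}html"
print(root.tag)
```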