
Digital data


Discrete, discontinuous representation of information

This article is about the concept in information theory and information systems. For the electronics concept, see Digital signal. For other uses, see Digital.
Digital clock. The time shown by the digits on the face at any instant is digital data. The actual precise time is analog data.

Digital data, in information theory and information systems, is information represented as a string of discrete symbols, each of which can take on one of only a finite number of values from some alphabet, such as letters or digits. An example is a text document, which consists of a string of alphanumeric characters. The most common form of digital data in modern information systems is binary data, which is represented by a string of binary digits (bits) each of which can have one of two values, either 0 or 1.

Digital data can be contrasted with analog data, which is represented by a value from a continuous range of real numbers. Analog data is transmitted by an analog signal, which not only takes on continuous values but can vary continuously with time, a continuous real-valued function of time. An example is the air pressure variation in a sound wave.

The word digital comes from the same source as the words digit and digitus (the Latin word for finger), as fingers are often used for counting. In 1942, mathematician George Stibitz of Bell Telephone Laboratories used the word digital in reference to the fast electric pulses emitted by a device designed to aim and fire anti-aircraft guns. The term is most commonly used in computing and electronics, especially where real-world information is converted to binary numeric form, as in digital audio and digital photography.

Symbol to digital conversion


Since symbols (for example, alphanumeric characters) are not continuous, representing symbols digitally is rather simpler than conversion of continuous or analog information to digital. Instead of sampling and quantization as in analog-to-digital conversion, such techniques as polling and encoding are used.

A symbol input device usually consists of a group of switches that are polled at regular intervals to see which switches are closed. Data will be lost if, within a single polling interval, two switches are pressed, or if a switch is pressed, released, and pressed again. The polling can be done by a specialized processor in the device so as not to burden the main CPU. When a new symbol has been entered, the device typically sends an interrupt, in a specialized format, so that the CPU can read it.
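
A minimal sketch of such a polling loop, written in Python for illustration; the switch-reading and CPU-notification functions below are hypothetical placeholders, not any particular device's API:

    import time

    POLL_INTERVAL = 0.01  # seconds; real device firmware polls every few milliseconds

    def read_switches():
        """Hypothetical hardware read: return the set of switch IDs currently closed."""
        return set()  # placeholder for reading an actual input register

    def notify_cpu(switch_id, pressed):
        """Hypothetical notification (e.g. an interrupt) sent to the main CPU."""
        print(f"switch {switch_id} {'pressed' if pressed else 'released'}")

    def poll_loop():
        previous = set()
        while True:
            current = read_switches()
            # A press-release-press within one interval is invisible here: that data is lost.
            for sw in current - previous:
                notify_cpu(sw, True)
            for sw in previous - current:
                notify_cpu(sw, False)
            previous = current
            time.sleep(POLL_INTERVAL)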

For devices with only a few switches (such as the buttons on a joystick), the status of each can be encoded as bits (usually 0 for released and 1 for pressed) in a single word. This is useful when combinations of key presses are meaningful, and is sometimes used for passing the status of modifier keys on a keyboard (such as shift and control). But it does not scale to support more keys than the number of bits in a single byte or word.
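
As a concrete illustration, the state of a handful of buttons can be packed into a single status word, one bit per button; the button names and bit positions below are arbitrary assumptions made for the example:

    # Bit positions assigned to each button (arbitrary choice for this example).
    BUTTON_A, BUTTON_B, SHIFT, CONTROL = 0, 1, 2, 3

    def pack_status(pressed_bits):
        """Encode a set of pressed buttons as one status word (1 = pressed, 0 = released)."""
        word = 0
        for bit in pressed_bits:
            word |= 1 << bit
        return word

    def is_pressed(word, bit):
        return bool(word & (1 << bit))

    status = pack_status({BUTTON_A, SHIFT})
    print(bin(status), is_pressed(status, SHIFT), is_pressed(status, CONTROL))
    # -> 0b101 True False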

Devices with many switches (such as a computer keyboard) usually arrange these switches in a scan matrix, with the individual switches on the intersections of x and y lines. When a switch is pressed, it connects the corresponding x and y lines together. Polling (often called scanning in this case) is done by activating each x line in sequence and detecting which y lines then have a signal, thus which keys are pressed. When the keyboard processor detects that a key has changed state, it sends a signal to the CPU indicating the scan code of the key and its new state. The symbol is then encoded or converted into a number based on the status of modifier keys and the desired character encoding.
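
The scanning logic can be sketched in Python as follows; the function that senses the y (column) lines stands in for the electrical sensing that real keyboard controllers perform in hardware:

    def sense_column_lines(active_row, closed_switches):
        """Stand-in for hardware: return the column (y) lines carrying a signal
        while the given row (x) line is driven, i.e. the closed switches in that row."""
        return {col for (row, col) in closed_switches if row == active_row}

    def scan_matrix(num_rows, closed_switches):
        """Drive each row line in turn and collect the (row, column) positions of all pressed keys."""
        pressed = set()
        for row in range(num_rows):
            for col in sense_column_lines(row, closed_switches):
                pressed.add((row, col))
        return pressed

    # Example: the keys at matrix positions (0, 2) and (3, 1) are held down.
    print(scan_matrix(num_rows=4, closed_switches={(0, 2), (3, 1)}))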

A custom encoding can be used for a specific application with no loss of data. However, using a standard encoding such as ASCII is problematic if a symbol such as 'ß' needs to be converted but is not in the standard.
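
The problem can be seen directly with Python's built-in codecs: ASCII has no code for 'ß', so encoding it fails, while a larger standard encoding such as UTF-8 can represent it:

    text = "Straße"

    try:
        text.encode("ascii")
    except UnicodeEncodeError as err:
        print("ASCII cannot represent:", err.object[err.start:err.end])  # -> ß

    print(text.encode("utf-8"))  # b'Stra\xc3\x9fe'; UTF-8 assigns a code to every Unicode symbol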

It is estimated that in 1986 less than 1% of the world's technological capacity to store information was digital, and that by 2007 the share had reached 94%. The year 2002 is assumed to be the year when humankind became able to store more information in digital than in analog format (the "beginning of the digital age").

States

Digital data come in three states: data at rest, data in transit, and data in use. Their confidentiality, integrity, and availability have to be managed during the entire lifecycle, from the 'birth' to the destruction of the data.

Properties of digital information

All digital information possesses common properties that distinguish it from analog data with respect to communications:

  • Synchronization: Since digital information is conveyed by the sequence in which symbols are ordered, all digital schemes have some method for determining the beginning of a sequence. In written or spoken human languages, synchronization is typically provided by pauses (spaces), capitalization, and punctuation. Machine communications typically use special synchronization sequences.
  • Language: All digital communications require a formal language, which in this context consists of all the information that the sender and receiver of the digital communication must both possess, in advance, for the communication to be successful. Languages are generally arbitrary and specify the meaning to be assigned to particular symbol sequences, the allowed range of values, methods to be used for synchronization, etc.
  • Errors: Disturbances (noise) in analog communications invariably introduce some generally small deviation or error between the intended and actual communication. Disturbances in digital communication result in errors only when the disturbance is so large that a symbol is misinterpreted as another symbol or the sequence of symbols is disturbed. It is therefore generally possible to have near-error-free digital communication. Further, techniques such as check codes may be used to detect errors and to correct them through redundancy or re-transmission (a minimal parity-check sketch appears after this list). Errors in digital communications can take the form of substitution errors, in which one symbol is replaced by another, or insertion/deletion errors, in which an extra symbol is inserted into, or a symbol is deleted from, a digital message. Uncorrected errors in digital communications have an unpredictable and generally large impact on the information content of the communication.
  • Copying: Because of the inevitable presence of noise, making many successive copies of an analog communication is infeasible because each generation increases the noise. Because digital communications are generally error-free, copies of copies can be made indefinitely.
  • Granularity: The digital representation of a continuously variable analog value typically involves a selection of the number of symbols to be assigned to that value. The number of symbols determines the precision or resolution of the resulting datum. The difference between the actual analog value and its digital representation is known as quantization error. For example, if the actual temperature is 23.234456544453 degrees but only two digits (23) are assigned to this parameter in a particular digital representation, the quantization error is 0.234456544453 (see the quantization sketch after this list). This property of digital communication is known as granularity.
  • Compressible: According to Miller, "Uncompressed digital data is very large, and in its raw form, it would actually produce a larger signal (therefore be more difficult to transfer) than analog data. However, digital data can be compressed. Compression reduces the amount of bandwidth space needed to send information. Data can be compressed, sent, and then decompressed at the site of consumption. This makes it possible to send much more information and results in, for example, digital television signals offering more room on the airwave spectrum for more television channels."
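
To make the check-code idea in the Errors item above concrete, here is a minimal even-parity sketch in Python; real systems use stronger codes (checksums, CRCs, Hamming codes), so this is only the simplest possible illustration:

    def add_parity(bits):
        """Append an even-parity bit so that the total number of 1s is even."""
        return bits + [sum(bits) % 2]

    def parity_ok(bits):
        """Return True if no single-bit error is detected."""
        return sum(bits) % 2 == 0

    word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
    print(parity_ok(word))            # True: no error detected

    word[2] ^= 1                      # a noise-induced substitution error flips one bit
    print(parity_ok(word))            # False: the single-bit error is detected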
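
The temperature example in the Granularity item can be reproduced with a short quantization sketch; the numbers are the ones used in the text, and the step size of 1 degree corresponds to keeping only the two integer digits:

    import math

    def quantize(value, step):
        """Round value down to the nearest multiple of step (the chosen resolution)."""
        return math.floor(value / step) * step

    actual = 23.234456544453
    represented = quantize(actual, step=1)   # only the two digits "23" are kept
    error = actual - represented
    print(represented, error)                # 23 0.234456544453 (up to floating-point rounding)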

Historical digital systems

Even though digital signals are generally associated with the binary electronic digital systems used in modern electronics and computing, digital systems are actually ancient, and need not be binary or electronic.

  • DNA genetic code is a naturally occurring form of digital data storage.
  • Written text (due to the limited character set and the use of discrete symbols – the alphabet in most cases)
  • The abacus was created sometime between 1000 BC and 500 BC and can be regarded as a simple digital calculator: it uses beads on rods to represent numbers, and the beads have meaning only in discrete up and down positions, not in analog in-between states.
  • A beacon is perhaps the simplest non-electronic digital signal, with just two states (on and off). In particular, smoke signals are one of the oldest examples of a digital signal, where an analog "carrier" (smoke) is modulated with a blanket to generate a digital signal (puffs) that conveys information.
  • Morse code uses six digital states—dot, dash, intra-character gap (between each dot or dash), short gap (between each letter), medium gap (between words), and long gap (between sentences)—to send messages via a variety of potential carriers such as electricity or light, for example using an electrical telegraph or a flashing light.
  • Braille uses a six-bit code rendered as dot patterns.
  • Flag semaphore uses rods or flags held in particular positions to send messages to the receiver watching them some distance away.
  • International maritime signal flags have distinctive markings that represent letters of the alphabet to allow ships to send messages to each other.
  • A more recent invention, the modem, modulates an analog "carrier" signal (such as sound) to encode binary digital information as a series of discrete pulses. A slightly earlier, surprisingly reliable version of the same concept was to record a sequence of audio "signal" and "no signal" tones (i.e. "sound" and "silence") on magnetic cassette tape for use with early home computers.

References

  1. Ceruzzi, Paul E (29 June 2012). Computing: A Concise History. MIT Press. ISBN 978-0-262-51767-6.
  2. Heinrich, Lutz J.; Heinzl, Armin; Roithmayr, Friedrich (29 August 2014). Wirtschaftsinformatik-Lexikon (in German). Walter de Gruyter GmbH & Co KG. ISBN 978-3-486-81590-0.
  3. Martin Hilbert; Priscila López (10 February 2011). "The World's Technological Capacity to Store, Communicate, and Compute Information". Science. Vol. 332, no. 6025. pp. 60–65. doi:10.1126/science.1200970. Archived (PDF) from the original on 31 May 2011. Also "Supporting online material for The World's Technological Capacity to Store, Communicate, and Compute Information" (PDF). Science. doi:10.1126/science.1200970. Archived (PDF) from the original on 31 May 2011. Free access to the article through here: www.martinhilbert.net/WorldInfoCapacity.html/
  4. "video animation on The World's Technological Capacity to Store, Communicate, and Compute Information from 1986 to 2010". 11 June 2011. Archived from the original on 21 February 2013. Retrieved 6 November 2013 – via YouTube.
  5. Miller, Vincent (2011). Understanding digital culture. London: Sage Publications. sec. "Convergence and the contemporary media experience". ISBN 978-1-84787-497-9.
  6. "The three states of information". The University of Edinburgh. Archived from the original on 14 April 2021. Retrieved 21 February 2021.

Further reading

  • Tocci, R. 2006. Digital Systems: Principles and Applications (10th Edition). Prentice Hall. ISBN 0-13-172579-3