- A means of storing a collection of data. Computer science is in part the study of methods for effectively using a computer to solve problems, or, in other words, of determining exactly the problem to be solved. This process entails (1) gaining an understanding of the problem; (2) translating vague descriptions, goals, contradictory requests, and often unstated desires into a precisely formulated conceptual solution; and (3) implementing the solution with a computer program. The solution typically consists of two parts: algorithms and data structures.
- The physical layout of data. Data fields, memo fields, fixed-length fields, variable-length fields, records, word-processing documents, spreadsheets, data files, database files, and indexes are all examples of data structures.
- In geographic information systems, a representation of the data model as a diagram, list, or array, reflecting a human-oriented view of how the data are implemented.
- A way in which data are stored for efficient search and retrieval. The simplest data structure is the one-dimensional (linear) array, in which stored elements are numbered with consecutive integers and contents are accessed by these numbers. Data items stored nonconsecutively in memory may be linked by pointers (memory addresses stored with items to indicate where the "next" item or items in the structure are located); a sketch of this arrangement appears after these definitions. Many algorithms have been developed for sorting data efficiently; these apply to structures residing in main memory as well as to structures that constitute information systems and databases.
- A data structure in computer science is a way of storing data in a computer so that it can be used efficiently. It is an organization of mathematical and logical concepts of data. Often a carefully chosen data structure will allow the most efficient algorithm to be used. The choice of a data structure often begins with the choice of an abstract data type. A well-designed data structure allows a variety of critical operations to be performed using as few resources, in both execution time and memory space, as possible. Data structures are implemented using the data types, references, and operations provided by a programming language.
- In programming, the term data structure refers to a scheme for organizing related pieces of information. The basic types of data structures include:
· files
· lists
· arrays
· trees
Each of these basic structures has many variations and allows different operations to be performed on the data.
- A data structure is a specialized format for organizing and storing data. General data structure types include the array, the file, the record, the table, the tree, and so on. Any data structure is designed to organize data to suit a specific purpose so that it can be accessed and worked with in appropriate ways. In computer programming, a data structure may be selected or designed to store data for the purpose of working on it with various algorithms.
- A data structure is a way of arranging information in the memory of a computer. In computer programming, it is often necessary to store large numbers of items in such a manner as to reflect a relationship between them. The three basic ways of doing this are the following:
- In network analysis, a network data structure provides:
· data structures and methods that are optimized for fast retrieval of network connectivity;
· a way to efficiently store network data structures in a permanent disk file;
· ready-to-use algorithms such as the shortest-path algorithm;
· support for some advanced modeling concepts that facilitate modeling hierarchical and multimodal transportation networks.
- A set of structural metadata associated with a data set, including information about how concepts are associated with the measures, dimensions, and attributes of a data cube, along with information about the representation of the data and related descriptive metadata.
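As promised above, the linked, nonconsecutive arrangement of items can be made concrete with a short example. The following C fragment is a minimal sketch of data items chained together by pointers; the names node and push_front are our own illustrative choices, not part of any of the definitions.

#include <stdio.h>
#include <stdlib.h>

/* One data item plus a pointer to the "next" item in the structure. */
struct node {
    int value;
    struct node *next;
};

/* Insert a new item at the front of the list. */
struct node *push_front(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    n->value = value;
    n->next = head;   /* link the new item to the former front item */
    return n;
}

int main(void)
{
    struct node *head = NULL;
    for (int i = 1; i <= 3; i++)
        head = push_front(head, i);
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d\n", p->value);   /* prints 3, 2, 1 */
    while (head != NULL) {          /* release the list */
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}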
I. GRAPH
In computer science, a graph is a kind of data structure, specifically an abstract data type (ADT), that consists of a set of nodes (also called vertices) and a set of edges that establish relationships (connections) between the nodes. The graph ADT follows directly from the graph concept from mathematics.
Informally, G = (V, E) consists of vertices, the elements of V, which are connected by edges, the elements of E. Formally, a graph G is defined as an ordered pair G = (V, E), where V is a set (usually finite) and E is a set consisting of two-element subsets of V.
Choices of representation
Two main data structures for the representation of graphs are used in practice. The first is called an adjacency list, and is implemented by representing each node as a data structure that contains a list of all adjacent nodes. The second is an adjacency matrix, in which the rows and columns of a two-dimensional array represent source and destination vertices and entries in the array indicate whether an edge exists between the vertices. Adjacency lists are preferred for sparse graphs; otherwise, an adjacency matrix is a good choice. Finally, for very large graphs with some regularity in the placement of edges, a symbolic graph is a possible choice of representation.
Comparison with other data structures
Graph data structures are non-hierarchical and therefore suitable for data sets where the individual elements are interconnected in complex ways. For example, a computer network can be modeled with a graph.
Hierarchical data sets can be represented by a binary or nonbinary tree. It is worth mentioning, however, that trees can be seen as a special form of graph.
Operations
Graph algorithms are a significant field of interest within computer science. Typical operations associated with graphs are finding a path between two nodes (as in depth-first search and breadth-first search) and finding the shortest path from one node to another (as in Dijkstra's algorithm). A solution for finding the shortest path from every node to every other node also exists, in the form of the Floyd-Warshall algorithm.
A directed graph can be seen as a flow network, where each edge has a capacity and carries a flow. The Ford-Fulkerson algorithm is used to compute the maximum flow from a source to a sink in such a graph.
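As a hedged illustration of the traversal operations mentioned above, here is a minimal breadth-first search in C over an adjacency matrix. The fixed-size queue and the letter labels are our own assumptions; the matrix encodes the example graph introduced in the next subsection.

#include <stdio.h>

#define N 3   /* vertices A=0, B=1, C=2 */

/* Breadth-first search from 'start' over an adjacency matrix. */
void bfs(int adj[N][N], int start)
{
    int queue[N], head = 0, tail = 0;   /* each vertex is enqueued at most once */
    int visited[N] = {0};

    queue[tail++] = start;
    visited[start] = 1;
    while (head < tail) {
        int v = queue[head++];
        printf("%c ", 'A' + v);
        for (int w = 0; w < N; w++)
            if (adj[v][w] && !visited[w]) {
                visited[w] = 1;
                queue[tail++] = w;
            }
    }
    printf("\n");
}

int main(void)
{
    /* Edges: A->B, A->C, C->B. */
    int adj[N][N] = { {0, 1, 1}, {0, 0, 0}, {0, 1, 0} };
    bfs(adj, 0);   /* prints: A B C */
    return 0;
}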
A graph can be represented in two ways: as an adjacency matrix or as an adjacency list.
For example, let us consider the following graph:
A----------->B
|            ^
|            |
|            |
V            |
C-------------
Adjacency Matrix
  A B C
A 0 1 1
B 0 0 0
C 0 1 0
Adjacency List
A ----> | B | ----> | C | ---- NULL
B ----> NULL
C ----> | B | ---- NULL
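The same example graph can be built in code. A minimal C sketch of both representations follows; the names edge and add_edge are our own illustrative choices, and note that prepending to a list reverses the printed order of neighbors.

#include <stdio.h>
#include <stdlib.h>

#define N 3   /* vertices A=0, B=1, C=2 */

/* One entry in a vertex's adjacency list. */
struct edge {
    int to;
    struct edge *next;
};

/* Prepend an edge from -> to onto the adjacency list of 'from'. */
void add_edge(struct edge *adj[], int from, int to)
{
    struct edge *e = malloc(sizeof *e);
    e->to = to;
    e->next = adj[from];
    adj[from] = e;
}

int main(void)
{
    /* Adjacency matrix: matrix[i][j] == 1 iff there is an edge i -> j. */
    int matrix[N][N] = { {0, 1, 1}, {0, 0, 0}, {0, 1, 0} };
    printf("  A B C\n");
    for (int i = 0; i < N; i++) {
        printf("%c", 'A' + i);
        for (int j = 0; j < N; j++)
            printf(" %d", matrix[i][j]);
        printf("\n");
    }

    /* Adjacency lists for the same edges: A->B, A->C, C->B. */
    struct edge *adj[N] = { NULL };
    add_edge(adj, 0, 1);   /* A -> B */
    add_edge(adj, 0, 2);   /* A -> C */
    add_edge(adj, 2, 1);   /* C -> B */
    for (int v = 0; v < N; v++) {
        printf("%c ->", 'A' + v);
        for (struct edge *e = adj[v]; e != NULL; e = e->next)
            printf(" %c", 'A' + e->to);
        printf(" NULL\n");
    }
    return 0;
}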
II. STACK
In computer science, a stack is an abstract data type and data structure based on the principle of Last In, First Out (LIFO). Stacks are used extensively at every level of a modern computer system. For example, a modern PC uses stacks at the architecture level, and they figure in the basic design of an operating system for interrupt handling and operating system function calls. Among other uses, stacks are used to run a Java Virtual Machine, and the Java language itself has a class called "Stack" that can be used by the programmer. The stack is ubiquitous.
A stack-based computer system is one that stores temporary information primarily in stacks, rather than hardware CPU registers (a register-based computer system).
Basic architecture of a stack
[Figure: a typical stack, storing local data and call information for nested procedure calls. The stack grows downward from its origin; the stack pointer points to the current topmost datum on the stack. A push operation decrements the pointer and copies the data to the stack; a pop operation copies data from the stack and then increments the pointer. Each procedure called in the program stores procedure return information and local data by pushing them onto the stack. This type of stack implementation is extremely common, but it is vulnerable to buffer overflow attacks (see below).]
A typical stack is an area of computer memory with a fixed origin and a variable size. Initially the size of the stack is zero. A stack pointer, usually in the form of a hardware register, points to the most recently referenced location on the stack; when the stack has a size of zero, the stack pointer points to the origin of the stack.
The two operations applicable to all stacks are:
- a push operation, in which a data item is placed at the location pointed to by the stack pointer, and the address in the stack pointer is adjusted by the size of the data item;
- a pop or pull operation: a data item at the current location pointed to by the stack pointer is removed, and the stack pointer is adjusted by the size of the data item.
There are many variations on the basic principle of stack operations. Every stack has a fixed location in memory at which it begins. As data items are added to the stack, the stack pointer is displaced to indicate the current extent of the stack, which expands away from the origin (either up or down, depending on the specific implementation).
For example, a stack might start at a memory location of one thousand, and expand towards lower addresses, in which case new data items are stored at locations ranging below 1000, and the stack pointer is decremented each time a new item is added. When an item is removed from the stack, the stack pointer is incremented.
Stack pointers may point to the origin of a stack or to a limited range of addresses either above or below the origin (depending on the direction in which the stack grows); however, the stack pointer cannot cross the origin of the stack. In other words, if the origin of the stack is at address 1000 and the stack grows downwards (towards addresses 999, 998, and so on), the stack pointer must never be incremented beyond 1000 (to 1001, 1002, etc.). If a pop operation on the stack causes the stack pointer to move past the origin of the stack, a stack underflow occurs. If a push operation causes the stack pointer to increment or decrement beyond the maximum extent of the stack, a stack overflow occurs.
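These rules can be made concrete with a short sketch. The following C fragment implements a downward-growing stack with the overflow and underflow checks just described; the capacity of 100 and the names push and pop are illustrative assumptions, not prescribed by the text.

#include <stdio.h>

#define STACK_SIZE 100

static int stack[STACK_SIZE];
/* The stack grows downward: sp is the index of the topmost item,
   and sp == STACK_SIZE means the stack is empty (pointer at the origin). */
static int sp = STACK_SIZE;

/* Push: decrement the pointer, then store; refuse on overflow. */
int push(int value)
{
    if (sp == 0) {
        fprintf(stderr, "stack overflow\n");
        return -1;
    }
    stack[--sp] = value;
    return 0;
}

/* Pop: load, then increment the pointer; refuse on underflow. */
int pop(int *value)
{
    if (sp == STACK_SIZE) {
        fprintf(stderr, "stack underflow\n");
        return -1;
    }
    *value = stack[sp++];
    return 0;
}

int main(void)
{
    int v;
    push(10);
    push(20);
    pop(&v); printf("%d\n", v);   /* 20: last in, first out */
    pop(&v); printf("%d\n", v);   /* 10 */
    pop(&v);                      /* reports a stack underflow */
    return 0;
}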
Some environments that rely heavily on stacks may provide additional operations, for example:
- Dup(licate): the top item is popped and then pushed twice, so that an additional copy of the former top item is now on top, with the original below it.
- Peek: the topmost item is inspected (or returned), but the stack pointer and the stack size do not change (the item remains on the stack). This is also called the top operation in many articles.
- Swap or exchange: the two topmost items on the stack exchange places.
- Rotate: the n topmost items are moved on the stack in a rotating fashion. For example, if n=3, items 1, 2, and 3 on the stack are moved to positions 2, 3, and 1 on the stack, respectively. Many variants of this operation are possible, with the most common being called left rotate and right rotate.
Stacks are visualized either growing from the bottom up (like real-world stacks), with the top of the stack in a fixed position (like a coin holder), or growing from left to right, so that "topmost" becomes "rightmost". This visualization may be independent of the actual structure of the stack in memory. Under it, a right rotate moves the first element to the third position, the second to the first, and the third to the second. Here are two equivalent visualizations of this process:
apple                            banana
banana    ==right rotate==>      cucumber
cucumber                         apple

cucumber                         apple
banana    ==left rotate==>       cucumber
apple                            banana
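A minimal C sketch of these additional operations over a small array-based stack follows; the names, the fixed capacity, and the three-item rotate are our own illustrative assumptions, and each operation assumes the stack holds enough items.

#include <stdio.h>

/* A tiny upward-growing stack of strings; 'top' indexes the topmost item. */
static const char *st[16];
static int top = -1;   /* -1 means empty */

void push(const char *v) { st[++top] = v; }
const char *peek(void)   { return st[top]; }   /* inspect only; no change */
void dup_top(void)       { push(st[top]); }    /* extra copy of the top item */
void swap_top(void)      /* exchange the two topmost items */
{
    const char *t = st[top];
    st[top] = st[top - 1];
    st[top - 1] = t;
}

/* Right rotate of the three topmost items, matching the diagram above. */
void rotate3_right(void)
{
    const char *t = st[top];
    st[top]     = st[top - 1];
    st[top - 1] = st[top - 2];
    st[top - 2] = t;
}

int main(void)
{
    push("cucumber");
    push("banana");
    push("apple");        /* "apple" is now topmost */
    rotate3_right();
    for (int i = top; i >= 0; i--)
        printf("%s\n", st[i]);   /* prints banana, cucumber, apple */
    return 0;
}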
A stack is usually represented in computers by a block of memory cells, with the "bottom" at a fixed location and the stack pointer holding the address of the current "top" cell in the stack. The top and bottom terminology is used irrespective of whether the stack actually grows towards lower memory addresses or towards higher memory addresses.
Pushing an item onto the stack adjusts the stack pointer by the size of the item (either decrementing or incrementing it, depending on the direction in which the stack grows in memory), pointing it to the next cell, and copies the new top item into the stack area. Depending again on the exact implementation, at the end of a push operation the stack pointer may point to the next unused location in the stack, or it may point to the topmost item in the stack. If the stack pointer points to the current topmost item, it is updated before a new item is pushed onto the stack; if it points to the next available location, it is updated after the new item is pushed.
Popping the stack is simply the inverse of pushing. The topmost item in the stack is removed and the stack pointer is updated, in the opposite order of that used in the push operation.
III. BUFFER
In computing, a buffer is a region of memory used to temporarily hold data while it is being moved from one place to another. Typically, the data is stored in a buffer as it is retrieved from an input device (such as a keyboard) or just before it is sent to an output device (such as a printer). However, a buffer may be used when moving data between processes within a computer. This is comparable to buffers in telecommunication. Buffers can be implemented in either hardware or software, but the vast majority of buffers are implemented in software. Buffers are typically used when there is a difference between the rate at which data is received and the rate at which it can be processed, or in the case that these rates are variable, for example in a printer spooler.
Applications
Buffers are often used in conjunction with I/O to hardware, such as disk drives, sending or receiving data to or from a network, or playing sound on a speaker. The line for a roller coaster at an amusement park offers a useful analogy: people arrive at an unknown and often variable rate, but the roller coaster loads them in bursts, as each coaster arrives and is filled. The line acts as a buffer, a temporary space where those wishing to ride wait until the ride is available. Buffers are usually used in a FIFO (first in, first out) manner, outputting data in the order it arrived.
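A minimal sketch of such a FIFO buffer in C follows (a fixed-capacity circular buffer; the capacity of 8 and the names put and get are our own assumptions):

#include <stdio.h>

#define CAP 8

/* A circular FIFO buffer: data leaves in the order it arrived. */
static int buf[CAP];
static int head = 0;    /* index of the oldest item           */
static int count = 0;   /* number of items currently buffered */

/* Producer side: returns -1 when the buffer is full. */
int put(int v)
{
    if (count == CAP)
        return -1;
    buf[(head + count) % CAP] = v;
    count++;
    return 0;
}

/* Consumer side: returns -1 when the buffer is empty. */
int get(int *v)
{
    if (count == 0)
        return -1;
    *v = buf[head];
    head = (head + 1) % CAP;
    count--;
    return 0;
}

int main(void)
{
    /* A fast producer fills the buffer in a burst... */
    for (int i = 1; i <= 5; i++)
        put(i);
    /* ...and a slower consumer later drains it in arrival order. */
    int v;
    while (get(&v) == 0)
        printf("%d\n", v);   /* prints 1 2 3 4 5 */
    return 0;
}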
Buffer versus cache
A cache often also acts as a buffer, and vice versa. However, a cache operates on the premise that the same data will be read from it multiple times, that written data will soon be read, or that there is a good chance of multiple reads or writes combining into a single larger block. Its sole purpose is to reduce accesses to the underlying, slower storage. A cache is also usually an abstraction layer designed to be invisible.