For many problems, the ability to formulate an efficient algorithm depends on being able to organize the data in an appropriate manner. The term data structure is used to denote a particular way of organizing data for particular types of operation.

These notes will look at numerous data structures ranging from familiar arrays and lists to more complex structures such as trees, heaps and graphs, and we will see how their choice affects the efficiency of the algorithms based upon them.

Often we want to talk about data structures without having to worry about all the implementation details associated with particular programming languages, or how the data is stored in computer memory.

We can do this by formulating abstract mathematical models of particular classes of data structures or data types which have common features.

These are called abstract data types, and are defined only by the operations that may be performed on them.

Typically, we specify how they are built out of more primitive data types (e.g., integers or strings), how to extract that data from them, and some basic checks to control the flow of processing in algorithms.

The idea that the implementation details are hidden from the user and protected from outside access is known as encapsulation.

We shall see many examples of abstract data types throughout these notes.
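As a minimal sketch of these ideas, here is a stack written in Python (the class name and method names are illustrative choices, not taken from the notes). It is defined purely by its operations, and the underlying list is an implementation detail hidden from the user, illustrating encapsulation:

```python
# A sketch of an abstract data type: a stack defined only by the
# operations it supports, not by how the data is stored.
class Stack:
    def __init__(self):
        self._items = []  # implementation detail, hidden by convention

    def push(self, item):
        """Add an item to the top of the stack."""
        self._items.append(item)

    def pop(self):
        """Remove and return the top item; error if the stack is empty."""
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        """Basic check used to control the flow of processing."""
        return len(self._items) == 0
```

A user of this type only needs to know what `push`, `pop`, and `is_empty` do; the stack could later be re-implemented over a different structure without changing any code that uses it.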

At an even higher level of abstraction are design patterns, which describe the design of algorithms rather than the design of data structures.

These embody and generalize important design concepts that appear repeatedly in many problem contexts. They provide a general structure for algorithms, leaving the details to be added as required for particular problems.

These can speed up the development of algorithms by providing familiar, proven algorithm structures that can be applied straightforwardly to new problems.

We shall see a number of familiar design patterns throughout these notes.

## What Are Arrays in Data Structures?

In computer science, the obvious way to store an ordered collection of items is as an array.

Array items are typically stored in a sequence of computer memory locations, but to discuss them, we need a convenient way to write them down on paper.

We can just write the items in order, separated by commas and enclosed by square brackets. Thus,

[1, 4, 17, 3, 90, 79, 4, 6, 81]

is an example of an array of integers. If we call this array a, we can write it as:

a = [1, 4, 17, 3, 90, 79, 4, 6, 81]

This array a has 9 items, and hence we say that its size is 9. In everyday life, we usually start counting from 1. When we work with arrays in computer science, however, we more often (though not always) start from 0.

Thus, for our array a, its positions are 0, 1, 2, . . . , 7, 8. The element in the 8th position is 81, and we use the notation a[8] to denote this element.

More generally, for any integer i denoting a position, we write a[i] to denote the element in the ith position. This position i is called an index (and the plural is indices). Then, in the above example, a[0] = 1, a[1] = 4, a[2] = 17, and so on.
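The indexing described above can be checked directly in Python, whose lists use the same zero-based convention (using Python here is a choice of illustration; the notes themselves are language-neutral):

```python
# The example array from the text, with zero-based indexing.
a = [1, 4, 17, 3, 90, 79, 4, 6, 81]

print(len(a))  # the size of the array: 9
print(a[0])    # element at index 0: 1
print(a[1])    # element at index 1: 4
print(a[8])    # element at index 8 (the last position): 81
```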

It is worth noting at this point that the symbol = is quite overloaded. In mathematics, it stands for equality. In most modern programming languages, = denotes assignment, while equality is expressed by ==.

We will typically use = in its mathematical meaning, unless it is written as part of code or pseudocode. We say that the individual items a[i] in the array a are accessed using their index i, and one can move sequentially through the array by incrementing or decrementing that index, or jump straight to a particular item given its index value.

Algorithms that process data stored as arrays will typically need to visit systematically all the items in the array, and apply appropriate operations on them.
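A typical such systematic visit can be sketched as follows, summing the example array by moving through it with an incrementing index (a minimal illustration, not a specific algorithm from the notes):

```python
a = [1, 4, 17, 3, 90, 79, 4, 6, 81]

# Visit every item in order, applying an operation (here, accumulation).
total = 0
for i in range(len(a)):  # i takes the index values 0, 1, ..., 8
    total += a[i]

print(total)  # 285
```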
