Why do you use a programming language? The simple answer is to code. However, this answer provides no insight into how programming languages work, nor into which one you should use for a given task. Understanding how a technology evolved is critical to using it effectively, so learning about the history of programming languages is important.
A programming language functions as an interface between humans and machines. In other words, humans use programming languages to tell man-made machines how to run. The alternative to using a programming language is not using software at all, which is unproductive for mathematical operations. That need for automated calculation is why the computer was born, with programs following not long after.
How can you perform math in an automated manner? The first step involves electricity. The second step involves a switch: think of a light switch, which can be flipped on to represent a 1 and flipped off to represent a 0. In order to perform mathematical operations, you must be able to store symbols such as 2 and 4 (as in 2 + 2 = 4). So how can you represent a number such as 4 with a light switch?
Instead of requiring 10 states per digit, as in a base-10 numeral system, a single light switch, which can be on (1) or off (0), represents a single digit in a base-2 (binary) number system. In other words, computers store numbers using 1s and 0s. In a base-10 system, the number 10 represents, from right to left, the sum of 0 (0 * 10^0) and 10 (1 * 10^1): the result is ten. In a base-2 system, the number 10 represents, from right to left, the sum of 0 (0 * 2^0) and 2 (1 * 2^1): the result is two.
The prefix “bi” means “two”, so a binary number system uses two digits: in a computer, those digits are 1 and 0. Instead of literal light switches, microscopic transistors switch electrical signals within the computer. Each transistor represents a 1 or a 0, known as a bit. However, a single bit on a computer doesn’t mean much without context.
How can you represent words on a computer in binary? Numerous character set standards were created in order to represent words in binary (e.g., ASCII, ANSI, EBCDIC). The significance is that these character sets are human-created standards which provide a specification for machines to share. These standards specified how many bits were required to encode a character (e.g., the w in word). As a result, 8-bit computing became standardized in computer processors. A group of 8 bits is called a byte.
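As a sketch in Go: each character of an ASCII string occupies one byte, and that byte's decimal and binary values can be printed directly:

```go
package main

import "fmt"

func main() {
	// Each character in an ASCII string is stored as one byte (8 bits).
	word := "Hi"
	for _, b := range []byte(word) {
		// %c prints the character, %d its decimal value, %08b its 8 binary digits.
		fmt.Printf("%c = %d = %08b\n", b, b, b)
	}
}
```

Running this prints `H = 72 = 01001000` and `i = 105 = 01101001`: the ASCII standard maps each character to a fixed byte value.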
The introduction of 8-bit machines would evolve into 16-bit, 32-bit, and 64-bit machines. These machines would lead to numbers also being represented in octal and hexadecimal number systems, which are more compact than binary. For more information on the history of computers, read Computer History (Computers).
Introducing The Compiler
Machine code represents the language of computers, expressed in bits (1s and 0s). Of course, writing 1s and 0s to create programs is tedious and complex for humans. As a result, computer programs called compilers were created in order to convert human-readable code into machine code. Once compiled, a program still has to be executed: the Central Processing Unit (CPU) is the final interpreter of machine code.
From this point onwards, a pattern emerged: programming languages were created in order to make it easier for humans to read and write code. Rather than compile directly to machine code, certain languages (e.g., C++) compile to other languages (e.g., Assembly), which are in turn translated into machine code. This led to the separation of languages into levels such as high-level programming languages and low-level programming languages.
The “History of Programming Languages” and “Timeline of Programming Languages” documents showcase various programming languages alongside their objectives, predecessors, and successors. Knowledge of these tools will assist you in creating more performant programs in a maintainable manner. With that being said, modern programming languages are typically broken down into specific categories to highlight their use cases.
Interpreted vs. Compiled
Understanding the meaning of compiled and interpreted (in computing) highlights the difference between compiled and interpreted programming languages. As a reminder, a compiler compiles code from one form (e.g., human-readable code) to another (e.g., machine code). A compiled language implies that the language will NOT use an interpreter at runtime, which is typically beneficial for performance. In contrast, an interpreted language implies that the language will be interpreted at runtime, which is typically beneficial for code iteration (programmer productivity).
Unmanaged vs. Managed
Programming languages may be referred to by the way they handle computer memory. Processing bits on a Central Processing Unit (CPU) is fast, but what if you need to store information (e.g., variables)? Computer memory and other computer storage options solve this problem at the cost of processing speed.
Random Access Memory (RAM) is built for high-speed access to physical locations in the computer called memory addresses. Each memory address refers to a group of bits which represent data or instructions. So a program is able to store and retrieve data from memory by writing to and reading from a memory address. However, RAM is erased when a computer shuts down.
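A minimal Go sketch of this idea: the & operator yields a variable's memory address, and writing through that address updates the stored value (the variable names here are illustrative):

```go
package main

import "fmt"

func main() {
	x := 4
	p := &x // p holds the memory address where x is stored

	*p = 2 // writing through the address changes the value of x

	fmt.Println(x) // 2
	fmt.Println(p) // prints an address such as 0xc000012028 (varies per run)
}
```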
Unexpected operations occur when a computer mishandles memory. So certain languages do NOT require the programmer to manage memory manually; instead, an automated form of memory management, such as a garbage collector, is provided by the runtime. A managed memory programming language provides the programmer with an automated form of memory management, while an unmanaged memory programming language requires the programmer to manually manage the computer’s memory.
Solid State Drives, Hard Disk Drives, and other direct-access data storage solutions are built for high-capacity, long-term storage that persists data beyond the power state of a computer (on/off).
Typed vs. Untyped
In order to compile a program, the language must be able to ensure that the code will run correctly. It’s common for modern languages to use data types to check the correctness of a program. Data types serve as an alternative to managing information (i.e numbers and words) with 1s and 0s. For more information on data types, watch What Are Data Types?
A strongly typed language strictly enforces its type rules (values are not silently converted between unrelated types), while a weakly typed language performs implicit conversions. Separately, types may be declared explicitly (the int in int var = 5) or inferred when a variable is assigned. A statically typed language performs type checks at compile time, while a dynamically typed language performs type checks at runtime. A nominally typed language performs type checks using a type’s name, while a structurally typed language performs type checks using a type’s underlying structure.
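As a sketch of these distinctions in Go, which is statically typed and supports both explicit and inferred declarations:

```go
package main

import "fmt"

func main() {
	var explicit int = 5 // type written out explicitly
	inferred := 5        // type inferred by the compiler, but still static

	fmt.Println(explicit + inferred) // 10

	// A mismatched assignment such as `explicit = "five"` would be
	// rejected by the compiler before the program ever runs.
}
```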
An example of a structural type check is provided by comparing functions in the Go programming language: func clap(a int) and func comment(b int) have the same type, since both functions are structurally defined as func(int).
Programming Paradigms
Programming paradigms provide mental models that assist programmers in solving programming problems. Certain programming languages may subscribe to programming paradigms such as Object Oriented Programming or Functional Programming. The importance of these paradigms is debatable; their significance is that they may influence how a typical program is created with a given programming language.