What is a Computer?

Computer

  • It is an electronic device used to store, handle, share, process, and calculate data very quickly.
  • It can also be used to control other machines.
  • Historically, the word referred to a person who performed mathematical computations.

Short History

  • Humans have used devices to aid computation for thousands of years.
    • Some of the best-known examples of early computing devices are:
      • the abacus developed by the Babylonians in around 2400 BCE (over 4000 years ago)
        • it was initially used to perform basic arithmetic tasks like addition and subtraction
      • the planisphere developed by Abū Rayhān al-Bīrūnī in the 11th century
        • it showed the visible stars for any time and date and was used for navigation before compasses and maps
      • the slide rule developed in the 1600s by English clergyman William Oughtred.
        • it was a hand-operated calculator used to perform operations ranging from basic arithmetic, such as multiplication and division, to logarithms, exponentials, and trigonometric functions.
  • It was not until the 19th century that Charles Babbage originated the concept of a programmable computer and designed the first mechanical computer.
    • He is considered the father of the computer.
  • Over the course of history, two types of computers have emerged: the analog and the digital computer.

Analog Computer

  • It uses continuously variable physical quantities, such as length, rotation, or voltage, to represent information.
  • Some examples of analog computers are:
    • the mechanical watch,
    • the slide rule,
    • the differential analyzer.
  • One of the earliest analog computers invented was the Antikythera mechanism.
    • It was discovered in a shipwreck off the Greek island of Antikythera.
    • It is believed to have been used by the ancient Greeks around 100 BCE (over 2,200 years ago) to calculate the positions of planets and other astronomical bodies.

Digital Computer

  • In contrast to the analog computer, information is represented as a long sequence of 0s and 1s, often called binary digits, or bits; for example, the decimal number 6 is written as 110 in binary.
  • One of the first programmable digital computers was developed by German engineer Konrad Zuse, who is often credited as the inventor of the modern computer.
    • Early electronic digital computers were built using vacuum tube technology, the very basis of the first generation of computers.
    • One computer with this technology is the ENIAC (Electronic Numerical Integrator and Computer).
      • It was designed during the Second World War to calculate ballistic trajectories for the United States Army.
  • However, the birth of the transistor gave rise to the next generation of computers.

The Second Generation Computer

  • Transistors quickly replaced vacuum tubes in computer design.
    • This quick adoption was due to several advantages of the transistor:
      • Transistors were significantly smaller and more compact than vacuum tubes.
      • They also required less power than vacuum tubes and therefore produced less heat.
      • They had a much longer, effectively indefinite, service life compared to vacuum tubes.
      • Computers built from transistors could pack more logic circuits into a compact space.
  • However, the first junction transistors were bulky and difficult to manufacture and mass-produce.
    • The first transistor compact enough to be mass-produced for a wide variety of uses was the MOS transistor, also known as the MOSFET (metal-oxide-semiconductor field-effect transistor).
    • The invention of the MOSFET catalyzed the microcomputer revolution.

Second Generation Programming: Assembly

Assembly Language

  • It is also known as ASM or simply assembly.
  • It is a group of programming languages that were prevalent on second generation computers.
  • It is a low-level programming language.
    • This means that the language closely resembles machine code rather than natural human language.
    • In short, assembly languages serve as a primary interface for sending instructions to the machine, as the short example after this list shows.
  • Some examples of assembly languages still commonly used today are ARM and x86 assembly.
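  • As a minimal sketch (NASM-style x86 syntax; the values 10 and 20 are purely illustrative), each mnemonic below corresponds directly to a short machine-code encoding:

    mov al, 10        ; assembles to the two bytes B0 0A
    add al, 20        ; assembles to the two bytes 04 14

  • The programmer writes the readable mnemonic on the left; the assembler emits the raw bytes on the right, which the CPU executes directly.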

History

  • Before assembly was mainstream, instructions for first generation computers were programmed using machine code.
    • Machine code uses binary representations of data, which are very hard for humans to read and made programming less accessible to developers and programmers.
    • Because of this, American computer scientist and Navy officer Grace Hopper developed the FLOW-MATIC programming language.
      • This programming language uses English mnemonics, or keywords, which are easier to understand and memorize than binary code.
  • As second generation computers became more prevalent in the late 1950s, so did the adoption of assembly as a programming language.
    • In addition, assembly also evolved to make use of the more complex instructions that transistor computers offered but first generation computers could not handle.
  • The role of assembly was vital for second generation computers because:
    • It made programming much more efficient for developers.
    • It made troubleshooting (detecting errors) easier.
    • It gave programmers direct control over hardware operations, which made second generation computers more efficient.
    • It paved the way for support for high-level programming languages.
    • It also facilitated batch processing and multiprocessing, which improved how computers used their resources overall.

How Assembly Works

  • A programmer writes assembly code to a text file.

  • The text file containing assembly code is then sent to an assembler.

    • The assembler then translates the assembly language into machine code in a process called assembling.
      • The time it takes for the assembler to process instructions is called assembly time.
    • During the assembling process:
      • The assembler parses, or reads, the code and looks for keywords, operands, etc.
      • Then the assembler translates or converts the keywords into binary instructions that the machine can then process.
    • After assembling, the CPU then processes or executes the resulting machine code.
  • Here is an example of assembly code for the x86 architecture (NASM syntax, targeting 32-bit Linux).

section .data          ; Data section for defining variables
    num1 db 10        ; Define a byte variable num1 with value 10
    num2 db 20        ; Define a byte variable num2 with value 20
    result db 0       ; Define a byte variable to store the result
    msg_pos db 'Result is positive!', 0 ; Null-terminated string
    msg_neg db 'Result is negative!', 0 ; Null-terminated string

section .text          ; Code section
    global _start      ; Declare the entry point for the program

_start:               ; Start of the program
    ; Load numbers into registers
    mov al, [num1]    ; Move the value of num1 into AL register
    mov bl, [num2]    ; Move the value of num2 into BL register

    ; Add the numbers
    add al, bl        ; Add the value in BL to AL (AL now holds the sum)

    ; Store the result
    mov [result], al  ; Store the result in the result variable

    ; Check if the result is positive
    cmp al, 0         ; Compare AL with 0
    jg print_positive  ; Jump to print_positive if AL is greater than 0

    ; If the result is not positive
    mov edx, msg_neg  ; Move the address of msg_neg into EDX
    jmp print_message  ; Jump to print_message

print_positive:
    mov edx, msg_pos  ; Move the address of msg_pos into EDX

print_message:
    ; Print the message using the Linux sys_write system call
    mov eax, 4        ; Syscall number for sys_write
    mov ebx, 1        ; File descriptor 1 is stdout
    mov ecx, edx      ; Pointer to the message to print
    mov edx, 19       ; Length of the message (both messages are 19 bytes long)
    int 0x80          ; Call the kernel to execute the syscall

    ; Exit the program
    mov eax, 1        ; Syscall number for sys_exit
    xor ebx, ebx      ; Return 0 status
    int 0x80          ; Call the kernel to execute the syscall
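  • As a usage sketch, assuming the code above is saved in a hypothetical file named example.asm and that the NASM assembler and the GNU linker are installed on a 32-bit-capable Linux system, it could be assembled and run roughly as follows:

# assemble the source into a 32-bit ELF object file
nasm -f elf32 example.asm -o example.o
# link the object file into a 32-bit executable
ld -m elf_i386 example.o -o example
# run it; with the values above it should print "Result is positive!"
./example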

Social and Economic Impact

  • Computer Economics
    • As first generation computers were large, bulky, and hard to maintain, there was little potential for a broader market for them due to their complexity.
    • However, this changed with the emergence of the second-generation, transistorized computers, which were much smaller.
      • Companies like IBM tried selling these computers to a larger market outside of research and academia, which stimulated demand and competition.
      • This helped push computer sales to around 7 billion dollars by 1969.
  • Increased Availability and Accessibility
    • As companies competed to sell their own computer models to more people, the computer became accessible to a wide range of institutions outside of research, such as the business, government, and education sectors.