Introduction to Computers, Programs, and Java

Some Important Dates in the History of Computers

1206

First known programmable device (Turkish inventor Al-Jazari's water-powered Castle Clock)

1837

English mathematician Charles Babbage describes what he calls the "Analytical Engine", a fully programmable mechanical computer able to perform calculations automatically. In the years leading up to the Analytical Engine, Babbage had built simpler mechanical calculators (which he called "Difference Engines"), but financial issues with those led him to conceive of this more ambitious machine capable of general-purpose computation. Sadly, he was unable to actually build the device at the time, but the designs included the ability to store and process data, with input and output accomplished through punched cards inspired by Jacquard looms. Instructions for the machine to follow (i.e., programs) then took the form of sequences of these cards.

1842

Ada Lovelace (daughter of poet Lord Byron), as part of her work translating Luigi Menabrea's paper on the Analytical Engine, writes notes on how to program this device to calculate Bernoulli numbers [needed for a uniform formula for the sum of the mth powers of the first n integers]. Consequently, she is considered the "first programmer".

1871

Babbage finally builds a piece of his Analytical Engine -- however, limited finances, the complex mechanical engineering required, and other factors kept him from ever completing the rest of the device.

1930's and 1940's
Digital computers were created and used at the dawn of, and during, World War II. Among the first were German civil engineer Konrad Zuse's "Z1" (a floating-point binary mechanical calculator with limited programmability that read instructions from perforated 35mm film) and his "Z3" (a binary 22-bit floating-point calculator featuring programmability with loops, but without conditional jumps, whose memory and calculation unit used telephone relays largely collected from discarded stock; it became operational in 1941). The "Colossus Mark I" appeared in Britain around the same time (1943), built to break secret codes; Alan Turing's use of probability in cryptanalysis contributed to its design. The ENIAC, an acronym for "Electronic Numerical Integrator and Computer", was also developed during World War II for the US Army to calculate artillery firing tables and was unveiled in 1946. It revolutionized computing with its electronic speed, but was massive and consumed a huge amount of power. It was "programmed" via patch cables and switches and ultimately used for calculations related to the hydrogen bomb.

1947
The first documented case of a computer malfunction attributed to a literal insect -- a moth found stuck in a relay of the Harvard Mark II computer by US Navy Rear Admiral, mathematician, and computer scientist Grace Hopper (often called the "mother of computing") and her team, who then famously taped the moth into their logbook with the note, "First actual case of bug being found". This popularized the term "debugging" for fixing errors in computer programs. The term "bug" for a defect actually predated this, but the real-life moth cemented its use in computing, thanks to Hopper's legendary role as a computer pioneer. Hopper was among the first programmers of the Harvard Mark I, and her work spanned compilers, software development, programming languages, and data processing.

What is a Computer?

A computer is a machine, consisting of both hardware and software, that manipulates data according to a list of instructions.

Modern computers generally have the following (hardware) components: a CPU (central processing unit), memory and storage, and assorted input/output (I/O) devices, such as keyboards, displays, disks, and network adapters.

These components talk to one another through various pathways of wires and circuits, each called a "bus". The system bus connects the CPU, memory, and I/O devices and serves as the main highway for data exchange within the system; the address bus tells the CPU where data needs to go or be retrieved from; the data bus carries the actual data being processed between components; and the control bus transmits commands and status signals between the CPU and devices.

CPU (Central Processing Unit)

Memory and Storage

Why does some memory (like DRAM) go away when the power goes off?

In a DRAM (dynamic RAM) memory cell, a 0 or a 1 is represented by a paired transistor and capacitor. The capacitor holds the bit of information, while the transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state. Think of a capacitor as a small bucket that stores electrons. To store a 1, the bucket is filled with electrons. To store a 0, it is emptied. Unfortunately, capacitors leak. Without the intervention of the memory controller, within a few milliseconds a capacitor's bucketful of electrons drains away. As such, to store information for any length of time, the computer must (very frequently) recharge all of the capacitors holding a 1 before they discharge. To do this, the memory controller reads the memory and then writes it right back again. This refresh operation happens automatically thousands of times per second.

How should one envision memory? (Important!)

You can envision memory as a list of positions (identified by distinct, sequential memory addresses) where data in the form of some fixed number of ones and zeros is stored at each address, as the diagram below suggests. Note that there is nothing about an individual byte of data that identifies it as a letter, a number, or anything else. How the ones and zeros at a particular memory address should be interpreted -- that is, what type of data they are meant to represent -- must additionally be stored someplace else. Consider the addresses 2003 and 2004 below. The data in one is interpreted as the letter "a", while the data in the other is interpreted as the value 17, yet nothing in the memory contents shown tells us to do that. Without some information elsewhere, we would not know how to interpret those ones and zeros! (The short Java sketch after the table makes this concrete.)

Memory Address    Memory Content    Decoded As
     ...               ...               ...
    2000            01001010        character "J"
    2001            01100001        character "a"
    2002            01110110        character "v"
    2003            01100001        character "a"
    2004            00010001        number 17
     ...               ...               ...
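
From a Java program's point of view, here is a minimal sketch (the class and variable names are invented for illustration) that stores the bit pattern 01100001 in a byte and prints it decoded two different ways:

    // DecodeBits.java -- the same bits mean different things under different interpretations
    public class DecodeBits {
        public static void main(String[] args) {
            byte bits = 0b01100001;           // the bit pattern at address 2003 above
            System.out.println((char) bits);  // decoded as a character: prints "a"
            System.out.println(bits);         // decoded as a number: prints "97"

            byte other = 0b00010001;          // the bit pattern at address 2004 above
            System.out.println(other);        // decoded as a number: prints "17"
        }
    }

Only the cast -- type information supplied by the program, not by the memory itself -- determines whether 01100001 shows up as the letter "a" or the number 97.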

Computer Programs

Programming Languages (and where Java fits in...)

A program's set of instructions is specified using a programming language. There are different types of programming languages -- for example, machine language (the raw binary instructions a CPU executes directly), assembly language (human-readable mnemonics for machine instructions), and high-level languages such as Java.

Some Trivia About Java's History

What makes up the Java Language?

Java involves two primary elements: the Java programming language itself, in which programmers write source code, and the Java Virtual Machine (JVM), which executes the byte code that Java source code is compiled into.

Additionally, there is the Java Development Kit (JDK), which programmers use to write Java programs. There are different editions of the JDK: Java SE (Standard Edition), Java EE (Enterprise Edition), and Java ME (Micro Edition).

These editions of the JDK have evolved over time. The first version, JDK 1.0, was released in 1996. The latest version released (as of September, 2025) is JDK 25.

A Minimal Java Program Development Environment

If desired, one can create a Java program using only a text editor, and compile and run it using only the command-line programs javac and java, respectively. javac is the Java compiler; it produces class files containing Java byte code. java launches the Java Virtual Machine (JVM), which executes the byte-code instructions contained in the class files produced by javac. Compiling and running things from the command line is perhaps not the most efficient way of doing things, but it will work, as the example below shows.
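
For example, suppose the following minimal program is saved in a file named Hello.java (the file and class names here are our own choice, though the two must match):

    // Hello.java -- a minimal Java program
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello, world!");
        }
    }

It can be compiled and then run from the command line as follows (note that the ".class" extension is left off when running):

    javac Hello.java     (produces the byte code file Hello.class)
    java Hello           (executes it, printing "Hello, world!")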

Integrated Development Environments (a.k.a. "IDEs")

The process of writing programs is generally easier if one works within an IDE (Integrated Development Environment). This is a software application that provides comprehensive facilities for software development, such as a source code editor (with syntax highlighting and code completion), build/compilation tools, and a debugger.

Popular IDEs for Java include IntelliJ IDEA, Eclipse, and NetBeans.

The Process of Writing a Program

However you plan on writing, compiling, and running your programs -- there is a certain order to the process, as suggested below:

  1. Create/Edit Source Code

    You can use a text editor, like "notepad" or "edit" in Windows, or "vi", "vim", or "gedit" in Unix, to type your Java source code. Several of these editors are designed to run from a command line interface (CLI) through a shell window. Each program file should have a "*.java" extension.

  2. Compile Source Code

    When the program is written, it is then compiled with the Java compiler, "javac <my sourcefile>", which is included in the JDK.

  3. Fix Syntax Errors

    If there were errors in your source files that keep the compiler from compiling your program (errors of this type are called "syntax errors"), re-edit the source files to eliminate them and then recompile. If there were no such errors, then a Java bytecode file will be produced (i.e., a file with a "*.class" extension).
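
    For instance, if the semicolon after the println call in the Hello.java sketch above were missing, javac would refuse to produce a class file and would report something along the lines of:

        Hello.java:4: error: ';' expected

    (The exact wording varies across JDK versions, but a syntax error report always names the offending file and line number.)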

  4. Run Bytecode

    The java bytecode file is a set of instructions that can be executed by the Java Virtual Machine (JVM). To run the JVM, one can type "java <my classfile>" from the command line. This runs the program.

    The *.class files are not meant to be directly read by humans. If you really want to know what is inside them, you can use "javap -c -s -verbose <classfile>" from the command line to translate a class file into bytecode you can actually read (which is printed to "stdout" by default). This is called "disassembling" the class file. In this class, you should never have to do this -- just know that it can be done. Expert programmers disassemble class files to discover how they might tweak the associated programs into running more efficiently. Note: If you do execute the above command, leave off the .class extension just like you do when running "java".

  5. Fix Runtime and Logical Errors

    There might still be errors present in your program, despite the fact that it compiled. Sometimes a program crashes in the middle of running; this is called a "runtime error". Sometimes the program runs but doesn't do what you want it to do; this is called a "logical error". If you encounter a runtime or logical error, just like before -- go back to the source files, fix the error, recompile, and run it again. Repeat this process until all of the errors are gone.
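
    As a small illustration (the class and values below are invented for this example), the following program compiles without complaint yet demonstrates both kinds of error:

        // ErrorDemo.java -- compiles cleanly, but still misbehaves
        public class ErrorDemo {
            public static void main(String[] args) {
                int[] scores = {90, 85, 70};

                // Runtime error: uncommenting the next line would crash the running
                // program with an ArrayIndexOutOfBoundsException (valid indexes are 0-2).
                // int oops = scores[3];

                // Logical error: the program runs, but integer division truncates,
                // so this prints 81.0 rather than the intended average of 81.666...
                double average = (scores[0] + scores[1] + scores[2]) / 3;
                System.out.println(average);
            }
        }

    The compiler accepts both problem lines without objection; only running and testing the program reveals them.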