ASCII Bytes, Part 1

How does string data get to a screen?

When I first looked into it, the concept of punching a key on my computer and having a character magically appear on the screen seemed easy enough. The OS just looks up the ASCII code in a table and sends the corresponding letter, number, or punctuation to the GPU to put on the screen. That is a naive description of what really goes on. Where is this table stored? How does the OS deal with row and column look-ups anyway? What about fonts? Do all OSs deal with characters the same way?

The ASCII table is an encoding scheme, not a decoding device. What you see on the screen is a text rendering produced after a lot of processing, and that's just for one character. How about a bunch of text? For that matter, how is string data, or a whole document's worth of text, stored in a file? Is it ASCII bytes, glyphs, SVGs? This is where the road forks. It depends on the OS, third-party libraries, and several conversion choices made to render fonts and language peculiarities. This is why the charset meta tag on an HTML page needs to declare the text encoding. We in the English-speaking world rarely pay attention because the default, UTF-8, is ASCII-compatible. Most other places have to know what to put in this tag.

Like most major categories in computer science, there are a lot of different approaches and revisions that supersede what came before. I will just describe generalities but attempt to go beyond the thought that pushing an ASCII 65dec, or the byte 01000001, into a text buffer will get an "A" printed on your screen. It's a start, but a lot happens before the pixel layout is generated to do exactly that.
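That starting point, the character as nothing but a number, is easy to see for yourself. A minimal Python sketch:

```python
# A character is just a number to the machine.
code = ord("A")             # ASCII/Unicode code point for "A"
print(code)                 # 65
print(format(code, "08b"))  # 01000001 -- the byte pattern mentioned above
print(chr(code))            # and back again: "A"
```

Everything in the rest of this article is about the distance between that number and the lit pixels on your monitor.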

Here's the quick and dirty description of what's going on:

Fetching the data: The CPU reads the byte data from the computer's memory or storage device. The data could be part of a file, a data buffer, or input from a user's keyboard.

Character Encoding: The data is assumed to be encoded using a specific character encoding scheme, such as ASCII or UTF-8. ASCII uses 7 bits per character, while UTF-8 is variable-width: it uses one to four bytes (8, 16, 24, or 32 bits) depending on the character being represented.
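You can watch that variable width in action. The sketch below (plain Python, no assumptions beyond the standard library) encodes a few characters and prints how many bytes each one takes:

```python
# UTF-8 is variable width: ASCII characters stay 1 byte,
# everything else takes 2 to 4 bytes.
for ch in ["A", "é", "€", "😀"]:
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), encoded.hex())
```

"A" comes out as the single byte 41 hex, exactly its ASCII value, which is why ASCII files are already valid UTF-8.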

Framebuffer: The GPU utilizes a special region in the computer's memory known as the framebuffer. The framebuffer is a memory buffer that holds the pixel data representing what is currently displayed on the screen. Each pixel is typically represented by a certain number of bits, depending on the color depth and resolution of the screen.
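A framebuffer can be modeled as one flat byte array. This sketch (the dimensions and RGBA layout are illustrative assumptions, not any particular hardware's format) shows the offset arithmetic that maps an (x, y) coordinate to bytes in that array:

```python
# A toy framebuffer: one flat byte array, 4 bytes (RGBA) per pixel.
WIDTH, HEIGHT, BPP = 640, 480, 4
framebuffer = bytearray(WIDTH * HEIGHT * BPP)

def put_pixel(x, y, r, g, b, a=255):
    """Compute the byte offset for (x, y) and write one pixel."""
    offset = (y * WIDTH + x) * BPP
    framebuffer[offset:offset + BPP] = bytes([r, g, b, a])

put_pixel(10, 20, 255, 0, 0)  # a red pixel at column 10, row 20
```

Color depth and resolution show up directly in that arithmetic: change BPP or WIDTH and every offset changes with it.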

Graphics Libraries and APIs: To interact with the GPU, the operating system and applications use graphics libraries and APIs (Application Programming Interfaces) such as DirectX or OpenGL. These libraries provide a set of functions that allow software to communicate with the GPU and issue commands for rendering graphics and text.

Rendering Pipeline: When an application or the operating system wants to display something on the screen, it sends rendering commands to the GPU through the graphics API. The rendering commands define how the graphics or text should be displayed, including information about geometry (shapes, positions), textures, colors, intensity, and other visual attributes.

Vertex Processing: The GPU's vertex processing stage handles geometric transformations, such as translating, rotating, and scaling the shapes. It processes the vertices (corner points) of the shapes to create the appropriate transformation matrices.
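The transforms the vertex stage applies are just matrix math on coordinates. A minimal 2D rotation sketch (real GPUs do this in 3D or 4D homogeneous coordinates, in bulk, in hardware):

```python
import math

def rotate_vertex(x, y, angle_deg):
    """Apply a 2D rotation matrix to one vertex -- the kind of
    transform the GPU's vertex stage performs for every vertex."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Rotating the point (1, 0) by 90 degrees lands it on (0, 1).
print(rotate_vertex(1.0, 0.0, 90))
```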

Rasterization: In this step, the GPU converts the geometric shapes into individual pixels. Rasterization involves determining which pixels within the shape boundaries should be filled in and which should be left empty.
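The "filled in or left empty" decision can be sketched with the classic edge-function test: a pixel center is inside a triangle if it sits on the same side of all three edges. This is a deliberately brute-force Python version of what GPUs do massively in parallel:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of edge A->B is point P on?"""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of pixel coordinates whose centers fall
    inside the triangle -- the core rasterization decision."""
    pixels = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5   # test the pixel's center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                pixels.add((x, y))
    return pixels

covered = rasterize_triangle((0, 0), (8, 0), (0, 8), 8, 8)
```

Text rendering ends up here too: once a glyph outline is turned into triangles or spans, this same inside/outside question decides which pixels it touches.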

Fragment Processing: The GPU processes each pixel (fragment) produced during rasterization. It applies textures, shading, lighting, intensity, and other visual effects to determine the final color and appearance of each pixel.

Output to Framebuffer: After fragment processing, the GPU writes the final pixel data back to the framebuffer. The framebuffer now holds the updated information about what should be displayed on the screen. These APIs have become de facto text, graphics, and image intermediaries for all computers.

Display Refresh: The GPU cooperates with the display controller, a hardware component responsible for sending the pixel data from the framebuffer to the physical display. The display controller refreshes the screen multiple times per second (typically 60Hz or higher), and during each refresh cycle, it reads the pixel data from the framebuffer and updates the physical display accordingly. This makes sense because each monitor or TV has its own hardware limitations.

Okay, not quick and dirty... there's more!

Graphics APIs, Part 2

Glyph handlers

These graphics APIs are now standard and essential partners in getting tons of data onto screens. Windows uses DirectX, Mac uses Metal, and Vulkan is an example of a cross-platform open standard. Glyphs are the visual representation of characters in a particular font. Here's their story:

Font Selection: To render ASCII characters, a font needs to be selected. A font is a collection of glyphs associated with specific code points. Fonts can be categorized into two main types: bitmap fonts and vector fonts.

Bitmap Fonts: Bitmap fonts are raster-based and consist of a grid of pixels for each character. These fonts are resolution-dependent and can appear pixelated if displayed at a different size or resolution than their intended design. Bitmap fonts were commonly used in early computer systems.
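A bitmap glyph really is just a grid of on/off pixels. Here's a hand-made toy glyph for "A" (the 5x7 pattern is my own illustration, not taken from any real font), rendered the same direct way a bitmap font engine would blit it:

```python
# A tiny hand-made 5x7 bitmap glyph for "A": each string is a row
# of bits, "1" = lit pixel. Real bitmap fonts store exactly this
# kind of grid for every character.
GLYPH_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

for row in GLYPH_A:
    print("".join("#" if bit == "1" else " " for bit in row))
```

Scale that grid up and you see why bitmap fonts go blocky: there is no more information to draw from, only bigger squares.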

Vector Fonts: Vector fonts use mathematical equations to define the shapes of characters. They are resolution-independent and can be scaled to any size without loss of quality. Most modern font formats, like TrueType (.ttf) and OpenType (.otf), use vector-based representations.

Glyph Rendering: When a character needs to be displayed on a screen or printed, the operating system or application accesses the appropriate font file and retrieves the corresponding glyph for the character's code point.

For bitmap fonts, the process is straightforward. The bitmap representing the glyph is directly displayed on the screen, with each pixel representing a part of the character's shape.

For vector fonts, the rendering process involves more complex steps: the glyph's outline is retrieved from the font file, scaled to the requested size, adjusted by hinting instructions so it stays legible at small sizes, rasterized into pixels, and usually anti-aliased to smooth the edges.

The rendered glyph is then displayed on the screen or printed, allowing users to see the character as part of the text or graphics being presented.

Character glyphs can be thought of as SVG (Scalable Vector Graphics) images, especially in the context of modern font formats like TrueType and OpenType.

SVG is a vector graphics format that uses XML-based markup to define the shapes and paths of graphical elements. It is resolution-independent and can be scaled to any size without losing quality. Each character glyph in a font is essentially a vector graphic that defines the outline of the character using mathematical curves (such as Bézier curves) and other graphical elements.
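Those mathematical curves are concrete and simple to compute. A quadratic Bézier curve, the building block TrueType uses for glyph outlines, is just a weighted blend of three control points. A minimal sketch (the control points here are arbitrary example values):

```python
def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]:
    B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

# Sample the curve into points, as a rasterizer would before
# converting the outline to pixels.
points = [quad_bezier((0, 0), (50, 100), (100, 0), t / 10) for t in range(11)]
```

Because t can be sampled as finely as needed, the same outline data yields a crisp glyph at 8 points or 800, which is the whole case for vector fonts.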

Here's why character glyphs can be compared to SVG images:

Vector Representation: Both character glyphs and SVG images are represented as vectors, using mathematical equations to define the shapes and curves of the graphical elements.

Scalability: As vector graphics, both character glyphs and SVG images are scalable to any size without loss of quality. This makes them ideal for use in various display contexts, from small font sizes to large headings or graphics.

Smooth Curves: Both glyphs and SVG images can achieve smooth curves and precise shapes, allowing for accurate representation and high-quality rendering.

XML-based Markup (for OpenType with SVG): The latest version of the OpenType font format, OpenType with SVG (also known as OpenType SVG or color fonts), allows for embedding actual SVG images within the font. In this case, each character glyph can be represented as an SVG image directly.

It's worth noting that while character glyphs in modern fonts can be thought of as SVG images, the underlying font format (e.g., TrueType or OpenType) may use a different representation for the glyph data. Traditional TrueType fonts use quadratic Bézier curves to define glyph outlines, OpenType fonts with CFF outlines use cubic Bézier curves, and OpenType with SVG uses actual SVG images.

Overall, the similarity in vector representation and scalability between character glyphs and SVG images is one of the reasons why modern font formats can support complex and visually rich text rendering, making them suitable for various typography and graphic design applications.

So is this the end of the story? No. As you can see, there is a lot to getting a 7-bit ASCII code onto a screen. There are also all the OS-dependent features like font types, font sizes, keyboard entry events, etc. Not to mention the display itself, with refresh rates and all sorts of display options and adjustments. All this misery just to say, "Hello World!" Who knew?