
PsCal - Create Personalized PDF Calendars

7 May 2024 · CPOL · 20 min read
This article describes a set of batch, AWK, and PostScript files that together allow you to create personalized, 12-page PDF calendars for any year you choose.

Introduction

This article briefly describes how you can create calendars for various years in the form of multi-page PDF files. An optional, simple text file allows you to personalize any calendars you create. The calendars are created by a set of files that I refer to as PsCal. PsCal generates its calendars using two programming languages that have been around for decades: AWK and PostScript. You can check out the original version of this article here. In that article I briefly discuss the AWK and PostScript languages, as well as the history of PsCal's development.

In this update, following two brief sections on how to generate a calendar using PsCal and PsCal's documentation, I discuss some of the changes I've made in the past year, concentrating on localization (L10n, which is a cool way to c9e), and an introduction to a few new PsCal features.

Some of the discussion is specific to PostScript, and it admittedly is of little practical use to most, if not all, of you. However, I find it somewhat interesting as a learning exercise in design and specification, if in no other fashion.

Further on you can read a bit about UTF-16 and UTF-8 files. Unlike the PostScript-specific discussion, this information may be generally useful to anyone who works with Unicode-formatted files (any programmer writing code that reads an input file?).

Here is an example of a month from a PsCal calendar. As this example shows, one of the new features I recently added is support for a few languages other than English:

[Image: an example month from a PsCal calendar, rendered in French]

Creating a Calendar

To create a calendar for the current year, all you need do is execute the MakeCalendar batch file with no parameters. By default it will create a 12-month calendar for the current year in a file named <current year>.pdf.

Also, by default, if there is a file named events.pse in the current directory, the events in that file will be placed on the calendar. This file is used whenever it is present in the current directory, unless you supply the name of a different file on the command line. Usage for MakeCalendar.bat is:

MakeCalendar [OPTIONS]

OPTIONS can be one or more of:

  -y=<Year>       The year for which the calendar is to be generated. DEFAULT:
                  the current year.

  -m=<MonthList>  A list of months to be included in the calendar.  DEFAULT:
                  1-12.  Months are numbered from 1 (Jan) through 12 (Dec).
                  Specify a range of months by placing a dash (-) between two
                  months. Use a comma (,) to separate individual months and
                  ranges of months.

  -e=<EventsFile> The name of a file containing events specific to you, such as
                  birthdays, anniversaries, paydays, etc. DEFAULT: a file named
                  "events.pse" in the current directory, if it exists.  This
                  file can also contain data that can modify the appearance of
                  the calendar, such as headers and footers, fonts, and various
                  graphics effects.

  -n=<BaseName>   The base name of two output files that are created. One is
                  a PostScript file (which is deleted by default), and one is a
                  PDF file. Each of them will contain the calendar for the year
                  in (obviously) different formats.  DEFAULT: the current (or
                  given) year, producing <year>.ps and <year>.pdf.

  -edit           Edit an events file to reflect PsCal 4.0 improvements to bitmap
                  and background image handling.
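
For example (the year, month list, and file names below are purely illustrative), the following command would create a file named Summer2025.pdf containing June through August plus December of 2025, using the events in a file named family.pse:

MakeCalendar -y=2025 -m=6-8,12 -e=family.pse -n=Summer2025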

PsCal Documentation

I have included a readme file (_readme.txt) in PsCal's top-level directory. That is obviously a good starting point if you want to experiment with PsCal. In that top-level directory is a sub-directory named doc. There you will find three documentation files:

EventFileFormat.pdf  Detailed information on PsCal's event file.
awkinfo.pdf          A substantial overview of AWK.
psinfo.ps            A terse overview of PostScript.

The AWK and PostScript files may help you better understand some of the later parts of this article, as well as how PsCal performs its task, but these files are by no means required reading.

The event file document, however, is something you should at least skim over if you decide to try out making your own calendars. It is quite detailed, showing all the things you can do in an event file to make your calendars reflect your unique sense of style.

Support for Languages Other Than English

The French calendar image above came about because of a question I was asked by CodeProject member FrogCoder5 about PsCal's other-than-English capability. I had to admit to being completely in the dark about how to display anything other than ASCII characters on a calendar.

Side note: I put almost all of the AWK code related to language support into a separate file: gawk\LanguageParts.awk. If you want to improve on any of the supported languages (English, French, German, Italian, Spanish, and Swedish) or want to add a language that is not supported currently, you should read the comment block at the beginning of that file. It contains valuable information about language support as well as controlling the holidays that are placed on a calendar—for better or worse the holidays are tied to the choice of language.

Baby Steps

One of the first things I needed as I started down the path to supporting languages other than English was a way to experiment with some non-English characters. At some point I was messing with Wikipedia or Google Translate and ended up copying some text that contained the U-umlaut character (Ü). My recollection is that it showed up properly in Notepad++ when I pasted it into some file.

Now I had a way to get various non-ASCII characters into my user event file. The next question was: which non-ASCII characters could I actually display with a given PostScript font? I soon learned that PostScript's text rendering capabilities allow you to select 1 of 256 possible character images to display at any one time. These images are often referred to as glyphs.

PostScript Font Encoding Vectors

A key part of rendering glyphs on an output device (a monitor or a piece of actual paper!) is an entity called an encoding vector. Every PostScript font has one of these; it is simply an array of 256 names, the names of the PostScript procedures that draw the font's glyphs.

You change the glyphs that are rendered by modifying some or all of the procedure names contained in a font's encoding vector. And, fortunately, it turns out that most fonts contain glyph drawing procedures for characters that are not, by default, mapped to one of the 256 encoding vector entries.

PostScript's Default Encoding

A PostScript string like (Abc) is treated by the show operator (responsible for rendering strings on the output device) as a series of numbers. By default, the string (Abc) is interpreted as 0x41, 0x62, 0x63. These three numbers are then used as indexes into the font's encoding vector⁠—⁠they select three PostScript procedures defined by the currently selected font. These procedures are used to render three glyphs on the output device.

You probably recognize that the hex numbers above are the ASCII codes for "A", "b", and "c". PostScript uses a default encoding vector whose first 128 entries are identical to ASCII. Many spots in the upper 128 entries, however, are undefined. Fortunately, PostScript's creators made it easy to change the glyphs that are drawn for any of the 256 possible procedure names, so those upper 128 entries need not be wasted.

ISO Latin 1 Encoding (ISO/IEC_8859-1)

It turns out that PostScript defines an encoding vector that maps the ISO Latin 1 character set to the 256 character positions. Not surprisingly, this vector's name is /ISOLatin1Encoding. The first 128 entries in this vector are the same as the default encoding: the upper- and lowercase letters, the digits, and some punctuation and control characters.

Fortunately, the glyphs it associates with character positions 128 (0x80) to 255 (0xFF) differ significantly from PostScript's default encoding vector. This link shows the characters it maps and where (which of the 256 possible positions).
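
If you want to poke at these vectors yourself, any PostScript interpreter (Ghostscript, for example) lets you look up individual entries. The two lines below are only a quick illustration:

StandardEncoding 16#41 get ==     % prints /A       (the glyph name at position 0x41)
ISOLatin1Encoding 16#E9 get ==    % prints /eacute  (the glyph name at position 0xE9)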

If you look at that link, you will see quite a few characters that are used in western European languages. So, one can make these characters available for use in a PostScript program by re-encoding a font with the /⁠ISOLatin1Encoding vector. Fortunately for me, the PostScript Blue Book contains examples of procedures that re-encode an entire font and that re-encode a subset of a font's glyphs.
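
Those Blue Book procedures boil down to copying the font's dictionary and swapping in a new /Encoding entry. Below is a minimal, generic sketch of that idea (the new font name /Palatino-Roman-ISO is only illustrative; PsCal wraps this work up in its own re-encoding procedure, introduced in the next section):

% Make a copy of a font that uses the ISO Latin 1 encoding vector.
/Palatino-Roman findfont                  % Get the original font dictionary.
dup length dict begin                     % Create an empty dictionary of the same size.
  { 1 index /FID ne { def } { pop pop } ifelse } forall   % Copy every entry except /FID.
  /Encoding ISOLatin1Encoding def         % Replace the encoding vector.
  currentdict
end
/Palatino-Roman-ISO exch definefont pop   % Register the copy under a new name.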

Re-encoding PsCal Fonts

In the PsCal PostScript code, I had to add a function to re-encode each font referenced by a calendar to use the ISO Latin 1 encoding vector. The hardest part of that process for me was manipulating font names.

Existing PsCal code simply used chosen fonts by name. For example, /⁠Palatino⁠-⁠Roman. I was unable to figure out how to take a name like that and modify it in any way. I ended up having to change pscal.awk to generate strings for the font definitions instead of names, like this:

(Palatino-Roman) Win1252Recode findfont _FS 3 get scalefont

The line above re-encodes the Palatino⁠-⁠Roman font and scales it to the desired size. The Win1252Recode procedure prepends an underscore to the (Palatino⁠-⁠Roman) string and turns it into a name to be used for the newly created font, and then it re-encodes the new font. So /⁠_Palatino⁠-⁠Roman is a newly created font that uses the Palatino Roman font face and can draw various glyphs that are not accessible by default.

Windows⁠-⁠1252 Encoding

You may be surprised by the name Win1252Recode above. From the discussion so far, you might expect to see something more along these lines:

(Palatino-Roman) IsoLatin1Recode findfont _FS 3 get scalefont

Congratulations if you did expect something like that; you are clearly paying very close attention!

It turns out that Microsoft, long, long ago, defined a character mapping of their own called Windows⁠-⁠1252. And it seems, in my case at least, that Windows uses this mapping by default. You can see the character mapping this scheme uses here.

Windows⁠-⁠1252 encoding differs from ISO⁠/⁠IEC_8859-1 in most character positions between 0x80 and 0x9F⁠—⁠ISO Latin 1 does not define any characters in that range. So, Windows⁠-⁠1252 encoding adds 27 characters such as 'Euro', 'florin', 'Scaron', and 'scaron' in that range.

The Windows⁠-⁠1252 encoding scheme is the one used by PsCal. Any fonts used to generate a calendar are re-encoded to reflect this mapping of character positions to glyph drawing procedures. So, a calendar can include any character that is part of this encoding scheme.

Making the Change to Windows⁠-⁠1252 Encoding

There was one small problem I had to solve to complete the Windows⁠-⁠1252 coding effort. If you look at the Windows⁠-⁠1252 code page layout you will see that the characters that are mapped in slots 0x80 through 0x9F are shown with their Unicode code point values. But what I needed in order to recode these characters in some PostScript font encoding vector was the names of the font's procedures that render these characters. Luckily, I was able to find the necessary information here.

On that web page you can search for code point values to find the glyph drawing procedure names assigned by Adobe. Armed with this information, I was able to create an array named /⁠WinModificationsArray that you can find defined in file PsStart.txt in PsCal's postscript directory. It looks like:

% Define an encoding array of changes Win1252 encoding makes to the ISO encoding.
/WinModificationsArray [
% ===== =================    ============   ==========   =========
% Index Name                 Octal escape   code point   Character
% ===== =================    ============   ==========   =========
  16#80 /Euro              %     \200         U+20AC         €
  16#82 /quotesinglbase    %     \202         U+201A         ‚
  16#83 /florin            %     \203         U+0192         ƒ
   ...
  16#9E /zcaron            %     \236         U+017E         ž
  16#9F /Ydieresis         %     \237         U+0178         Ÿ
] def

The above array holds pairs of entries. The first item in a pair is the hexadecimal index into the encoding array, and the second item is the name of the PostScript procedure that renders the glyph on the output device. The comments to the right of each pair show the octal escape that can appear in a PostScript string to select the character, the Unicode code point of the character, and a representation of the glyph that the PostScript procedure will render.

The font reencoding process involves making a copy of PostScript's built-in /⁠ISOLatin1Encoding vector with the name /⁠Win1252Encoding, then applying the changes embodied in the /⁠WinModificationsArray to that vector. Finally, whenever a font is declared for use in a calendar, it is re-encoded using this /⁠Win1252Encoding vector.

So, in this way, most, if not all, characters used in English and many Western European languages can be rendered on the PostScript calendars created by PsCal.
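
As a purely illustrative wrap-up (the font size, position, and text below are made up), here is how a font re-encoded by PsCal can then draw both ISO Latin 1 characters and the Windows-1252 additions, selected with octal escapes inside an ordinary PostScript string:

/_Palatino-Roman findfont 12 scalefont setfont   % A font previously re-encoded by Win1252Recode.
72 700 moveto
(Entr\351e: 12,50 \200) show   % \351 = 0xE9 (eacute), \200 = 0x80 (Euro): "Entrée: 12,50 €"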

Support for UTF-8 and UTF-16 Event Files

In order to support the Windows⁠-⁠1252 encoding, I had to clean up PsCal's handling of UTF-8 and UTF-16 formatted event files. Next I briefly describe some of the changes I had to make, as well as some possibly useful general information about Unicode files.

UTF-16 Input

I have long known that manually processing a UTF-16 file with GAWK is possible, but would be a lot of work. So, many years ago I wrote a library function named LibOpenPossibleUtf16File() that detected UTF-16LE and UTF-16BE files and made a new, temporary file⁠—⁠an ASCII copy of the UTF-16 file using all the file's even or odd bytes (depending on its LE or BE status). The other (normally 0x00) bytes of the file were simply ignored. If the given file was not encoded as UTF-16, then it simply opened the given file. In either case, it returned the name of the file that it opened (the original or a copy).

One important thing I did not know about UTF-16 files is that they consist of 2-byte values that are simply the characters' Unicode code points (surrogate pairs aside, which none of the characters PsCal supports require). That is, an "A" in a UTF-16 file is represented by 0x0041, which is the Unicode code point for that character. So, by dropping the upper, zero byte, you are left with the ASCII code for "A".

Now, if PsCal only supported ISO Latin 1 characters, the original version of the LibOpenPossibleUtf16File() function would have worked as written. That is because the Unicode code points of all of the characters in the ISO Latin 1 encoding are 0xFF or below⁠—⁠the upper bytes of the code points of all the ISO Latin 1 characters are 0x00. So, disregarding the upper 0x00 byte causes no harm.

However, the characters that Windows⁠-⁠1252 encoding adds have code points of 0x0100 or higher, so their upper bytes in a UTF-16 file are non-zero. For example, the zcaron character, ž, has a Unicode code point of 0x017E. This meant I had to start paying attention to the upper bytes of the UTF-16 files when they were non-zero to see if they were the first byte of a character's code point that is part of the Windows⁠-⁠1252 encoding.

The change I made is that, when one of these characters is encountered, the character's index within the Windows-1252 encoding scheme is emitted into the file copy. For the zcaron character, for example, a 0x9E is written to the output file "copy". Any characters with code points greater than 0xFF that are not in Windows-1252 are replaced by a space character.
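
To make that concrete, here is what happens to ž (U+017E) in a UTF-16LE event file:

    UTF-16LE bytes for ž (U+017E):  7E 01   (low byte first)
    what the old copy kept       :  7E      (which is "~", not what was wanted)
    what the copy now contains   :  9E      (ž's position in the Windows-1252 encoding)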

UTF-8 Input

I was aware that GAWK supports UTF-8 files, but I was not aware of what that support encompassed. What I believe GAWK's support to be is quite simple—as far as I can tell it simply treats each byte of a UTF-8 file as a separate character.

I was also unaware that UTF-8 files, like UTF-16 files, consist of Unicode code points and not ASCII characters. And, unlike UTF-16 files, the encoding of the code points can be 1, 2, 3, or 4 bytes in length.

In a UTF-8 file an "A" whose code point is 0x0041 is represented by a single byte of 0x41. In fact, code points 0-127 were defined to match the ASCII character set, and they are all encoded as a single byte in the file. That's why people sometimes confuse ASCII and UTF-8 files as somehow being the same, but they most assuredly are not!

Any byte of a UTF-8 file having its high bit set is part of a multi-byte encoding of some character's Unicode code point. What I needed to do with UTF-8 files was similar to what I had done with UTF-16 files, the big difference being the conversion from multi-byte encodings of varying lengths into the proper Windows-1252 character index.
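
As a worked example, ž (U+017E) needs more than 7 bits, so UTF-8 stores it in the two-byte form 110xxxxx 10xxxxxx:

    code point U+017E      =           1 0111 1110   (binary)
    two-byte UTF-8 pattern =  110xxxxx 10xxxxxx
    bits filled in         =  11000101 10111110   =  0xC5 0xBE

Concatenating those two bytes the way the function below does yields 0xC5BE (decimal 50622), which is exactly the index used in the conversion array shown a little further on.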

If you are interested in how I did this, you can check out the AWK function LibConvertUtf8ToWin1252() below. (The LibOrd() helper it calls simply returns the numeric byte value of a one-character string; standard AWK has no built-in ord() function.) This function takes a line of UTF-8 text as input and returns the line with any encoded Windows-1252 characters replaced by their Windows-1252 character index. Any encoded characters in the text that are not part of the Windows-1252 character set are replaced by a space character.

AWK
################################################################################
## If there are any UTF-8 encoded Unicode code point characters embedded in the
## input text and they are part of the Win1252 encoding scheme (including the
## ISO Latin 1 characters), replace them in the text by their Win1252/ISO Latin
## 1 character index.
##
## INPUT:
## Text    The UTF-8 encoded text to convert.
##
## RETURNS: Text with valid code points replaced by their character positions in
##          the Win1252/ISO scheme. Invalid code points (i.e. those not included
##          in the Win1252/ISO scheme) are replaced by a space.
################################################################################
function LibConvertUtf8ToWin1252(Text,                              # Parameter
            NewText,n,TextChars,i,Utf8Code)                         # Local vars
{
    NewText = ""                                 # Text to be returned.
    n = split(Text,TextChars,"")                 # Text --> array of n characters.
    for (i = 1; i <= n; i++) {                   # For each character in the array...
        Utf8Code = 0
        while (LibOrd(TextChars[i]) >= 0x80) {   # Over 0x7F is part of an encoding.
            # Collect bytes of this UTF-8 encoded Unicode code point.
            Utf8Code = (Utf8Code * 0x100) + LibOrd(TextChars[i++])
        }
        if (Utf8Code == 0) {
            NewText = (NewText TextChars[i])     # Current character is ASCII.
        }
        else {
            --i
            if (Utf8Code in gConvLibUtf8ToWin1252) {
                # Convert Windows-1252-specific characters.
                NewText = (NewText gConvLibUtf8ToWin1252[Utf8Code])
            }
            else if (Utf8Code >= 0xC2A0 && Utf8Code <= 0xC2BF) {
                # Convert characters in ISO Latin 1.
                NewText = (NewText sprintf("%c",and(Utf8Code,0xFF)))
            }
            else if (Utf8Code >= 0xC380 && Utf8Code <= 0xC3BF) {
                # Convert more characters in ISO Latin 1.
                NewText = (NewText sprintf("%c",0x40 + and(Utf8Code,0xFF)))
            }
            else if (Utf8Code <= 0xFF) {
                # This is a character that has already been converted into its
                # proper Win1252 character index.
                NewText = (NewText sprintf("%c",Utf8Code))
            }
            else {
                # The encoded character is NOT in the Win1252 layout.
                NewText = (NewText " ")
            }
        }
    }
    return NewText
}

One AWK-specific part of the above code that may not be at all clear is:

AWK
if (Utf8Code in gConvLibUtf8ToWin1252) {
    # Convert Unicode code points existing in Windows-1252.
    NewText = (NewText gConvLibUtf8ToWin1252[Utf8Code])
}

That code tests whether Utf8Code is an index of array gConvLibUtf8ToWin1252, and, if it is, it appends the value of that array entry to the text. The array is defined like this:

AWK
# The indexes below are the UTF-8 encodings converted to a decimal number.
# For example, 14844588 === 0xE282AC.
gConvLibUtf8ToWin1252[14844588] = sprintf("%c", 0x80) #   U+20AC   E2 82 AC   €
gConvLibUtf8ToWin1252[14844058] = sprintf("%c", 0x82) #   U+201A   E2 80 9A   ‚
gConvLibUtf8ToWin1252[50834]    = sprintf("%c", 0x83) #   U+0192   C6 92      ƒ
...
gConvLibUtf8ToWin1252[50579]    = sprintf("%c", 0x9C) #   U+0153   C5 93      œ
gConvLibUtf8ToWin1252[50622]    = sprintf("%c", 0x9E) #   U+017E   C5 BE      ž
gConvLibUtf8ToWin1252[50616]    = sprintf("%c", 0x9F) #   U+0178   C5 B8      Ÿ

The value passed to the sprintf() function is the index of the character in the Windows⁠-⁠1252 encoding scheme. The comments to the right of each entry show the character's Unicode code point, its UTF-8 encoding, and the actual character.

Note that AWK handles a sparse array like gConvLibUtf8ToWin1252 with ease. In AWK, all array indexes are treated as strings that are then hashed, so an AWK array is effectively a hash table.

There is another aspect of the above function that may cause a bit of confusion. Earlier I said that the ISO Latin 1 characters all had Unicode code points of 0xFF or below, yet in the above function values in the ranges 0xC2A0-0xC2BF and 0xC380-0xC3BF are treated as referring to ISO Latin 1 characters. This is because those ranges cover the UTF-8 encodings of the Unicode code points from 0xA0-0xFF. In other words, the function above works on UTF-8 encodings of Unicode code points, not on the code points themselves.
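
A quick worked check: é has code point U+00E9 and is encoded in UTF-8 as 0xC3 0xA9, so the function sees Utf8Code = 0xC3A9, lands in the 0xC380-0xC3BF branch, and computes 0x40 + 0xA9 = 0xE9, which is é's position in the ISO Latin 1 (and Windows-1252) encoding. For the 0xC2A0-0xC2BF branch no offset is needed because there the low byte already equals the code point.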

If you are interested in how Unicode code points are encoded in UTF-8 files, you should study this Wikipedia page.

Byte Order Marks

I'm guessing most of you are at least somewhat familiar with UTF-16 byte order marks (BOMs). Something I ran across while cleaning up my support for Unicode files is that BOMs are actually discouraged, especially in UTF-8 files.

For years I thought that a BOM was required in a UTF-16 file, but that likely shows that I have been using only Windows and UEFI for the past (let's just say many) years. If you create a UTF-16 formatted PsCal event file, it must include a BOM. Like many good programmers, I am a bit lazy, and I really don't want to add code to try and decipher a file's encoding.

I found it easy enough to handle UTF-8 files that do include the BOM, however, so a UTF-8 formatted PsCal event file is acceptable with or without a BOM. Any event file missing a BOM is assumed to be a UTF-8 file.
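
For reference, the standard byte order marks are:

    UTF-8     BOM:  EF BB BF
    UTF-16LE  BOM:  FF FE
    UTF-16BE  BOM:  FE FF
    UTF-32LE  BOM:  FF FE 00 00
    UTF-32BE  BOM:  00 00 FE FF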

Following is the definition of a function I added to pscal.awk. It is called after the first line of an event file (or some file it includes) has been read, but before any other action is taken. It really simplifies PsCal's handling of various input file formats.

AWK
################################################################################
## Decode a file's BOM and take appropriate action.
##
## LineIn should be the first line read from gEventsFile. If the BOM is one that
## is recognized, it is stripped from Line, then Line is assigned back to $0.
##
## If the BOM indicates a UTF-16 file, a library function is invoked to make an
## ASCII (UTF-8) copy which is then assigned to gEventsFile. The first line of
## this copy is then read into $0.
##
## If the BOM is NOT recognized, $0 is not modified and UTF-8 is assumed.
##
## RETURNS: The file type (gUtf32, gUtf16LE, gUtf16BE, gUtf8BOM, or gUtf8).
################################################################################
function _HandleBom(LineIn,                                         # Parameter
            Byte1,Byte2,Byte3,Byte4,UTF32)                          # Local vars
{
    Byte1 = LibOrd(substr(LineIn,1,1))
    Byte2 = LibOrd(substr(LineIn,2,1))
    Byte3 = LibOrd(substr(LineIn,3,1))
    Byte4 = LibOrd(substr(LineIn,4,1))

    if (Byte1 == 0xEF && Byte2 == 0xBB && Byte3 == 0xBF) {
        $0 = substr(LineIn,4)
        return gUtf8BOM
    }

    # Check for a UTF-32 BOM before the UTF-16 ones, because a UTF-32LE BOM
    # (FF FE 00 00) begins with the same two bytes as a UTF-16LE BOM.
    UTF32 = Byte1 + lshift(Byte2,8) + lshift(Byte3,16) + lshift(Byte4,24)
    if (length(LineIn) >= 4 && (UTF32 == 0x0000FEFF || UTF32 == 0xFFFE0000)) {
        return gUtf32
    }

    if ((Byte1 == 0xFE && Byte2 == 0xFF) || (Byte1 == 0xFF && Byte2 == 0xFE)) {
        close(gEventsFile)
        gEventsFile = LibOpenPossibleUtf16File(gEventsFile)
        getline < gEventsFile # Re-read line 1 without the BOM or encoding.
        return (Byte1 == 0xFF) ? gUtf16LE : gUtf16BE
    }

    return gUtf8
}

The global variable gEventsFile holds the name of the file whose first line was read into $0 (AWK's current input record) and passed to _HandleBom as the LineIn parameter.

If the first 3 bytes of LineIn are those of the UTF-8 BOM, then they are stripped from $0 so the higher-level code never sees them.

When one of the UTF-16 BOMs is present, the event file is closed, then the LibOpenPossibleUtf16File() function is called to make a copy that AWK can handle properly. gEventsFile is redefined to reflect the name of the temporary file that the library function created. Finally, the getline function reads the first line of the file copy into the $0 global variable, so, again, the higher-level code has no idea that there was anything "odd" about the input file and does not have to deal with a BOM or multi-byte characters.

If no recognized BOM is present, the function assumes the file is UTF-8 without a BOM.

At one time I considered treating UTF-8 files similarly to UTF-16 files: making a copy in which any multi-byte encodings of Windows-1252/ISO Latin 1 characters had been converted into their character index. I ultimately decided to forgo this approach and instead call the LibConvertUtf8ToWin1252 library function as each line of a UTF-8 file is read in from an events file.

I was able to do this because PsCal uses a single function to read lines from an events file. That function already handles several editing-type tasks, such as skipping over lines in a comment block, stripping certain white space and in-line comments, and assembling continuation lines. Handling multi-byte UTF-8 encoded characters there seemed like a reasonable task to add, especially given my belief that most user event files will be UTF-8 encoded.

And now (with apologies to the Monty Python crew) for something completely different...

New Text Effect Feature

One of the new features I added to PsCal recently is a way to select one of several text effects to be applied when rendering an event's text on a calendar day. You do that by adding an option to the event's definition like this: ;efx=<effect>. Ultimately I ended up with two text glow effects, four text box effects, and a numeric effect that selects the percentage of black to use when rendering the text.

Text Glow

The text glow effects are named WGlow and BGlow, with the "W" and "B" indicating White and Black respectively. They display an event's text in black or white with an outer glow of sorts that is the opposite "color". These effects ensure that any text printed over top of an image will be readable, regardless of the particular image in the background.

I made WGlow PsCal's default effect, and because of that you no longer have to lighten any background images that you create to display on a calendar day. That is to say, no matter the various shades of gray in a background image, the WGlow effect ensures that any text you display over the image remains readable, as in this example from PsCal's 2020 sample calendar:

[Image: an example calendar day from PsCal's 2020 sample calendar, with event text rendered over a background image ("Oppie") using the WGlow effect]

Text Boxes

The text box effects are named WBox, BBox, WWBox, and WBBox, and, as with the glow effects, the "W" and "B" preceding "Box" indicate a white or black box respectively. The WBox and BBox effects draw a white or black box that is just big enough to fit the event text, which is then drawn in the opposite color.

The WWBox and WBBox effects operate like the previously described box effects, with the exception that the box they render is the full width of a day on the calendar—it is NOT fitted to the text's width.

Text Gray Shade

The final effect controls the shade of gray used to print event text. You specify the shade as an integral percentage of black, so ;efx=100 prints totally black text, ;efx=50 prints gray text, and ;efx=10 prints very light gray text.

Miscellaneous Event File Additions

I have added several new capabilities to user event files, some of which I introduce here.

Ones that I'll simply mention are: continuation lines, in-line comments on any line, multiple graphics functions allowed in a single event definition, comment blocks, and elimination of the need for "@" at the beginning of lines in the ps_functions and fonts sections.

Note that the changes mentioned above as well as those introduced below are all covered in detail in the EventFileFormat.pdf file located in PsCal's doc directory.

@month:<month> and @year:<year>

These two new keywords allow you to specify a default month and default year. When any event that follows these keywords doesn't specify the month or year of the event, PsCal assumes it occurs in the default month and year. These keywords can appear multiple times throughout an event file.

@include:[<drive>:][<path>;]<file name>

The include keyword allows you to include other files as if they were part of the event file. I think this is most useful for including PostScript images that you want to place on your calendars. You might also decide to use this feature to maintain each year's events in separate files.

@include_dir:[<drive>:]<path>[;[<drive>:]<path>[;...]]

The include_dir keyword allows you to define one or more directories where PsCal should look for include files. Any include file reference that does not contain path information is assumed to be located in the current directory. If it does not exist in the current directory, and if there is a preceding include_dir keyword, then PsCal will search for the include file in the directories defined by the include_dir keyword (in the order they are listed).
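
As a small, purely illustrative fragment (the directory and file names are made up), these keywords might be combined near the top of an events file like this:

@include_dir:C:\PsCal\images;D:\Calendars\Shared
@include:BirthdayCake.ps
@year:2025
@month:7

With those defaults in place, any later event that omits its month or year is assumed to fall in July 2025, and BirthdayCake.ps is looked for first in the current directory and then in the two listed directories.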

Conclusion

First, my sincere thanks to anyone who had the stamina to get through this article! I hope you were able to learn something from it that will be of use to you in the future.

Adding support to PsCal for languages other than English was a great learning opportunity for me, as well as being quite a bit of fun. I owe a debt of gratitude to FrogCoder5 for the push! Because of the work I put into the effort, I gained an even better appreciation for the design of the PostScript language and a much better understanding of Unicode files.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Retired
United States
In my teens I enlisted in the U.S. Air Force as an imagery interpreter. While in the USAF, I graduated from The University of Texas at Austin with a BS degree in Electrical Engineering concentrating in computer engineering. I then got a commission and finished out my USAF career as a computer wargaming programmer.

After about 13 years, I left the Air Force for Texas Instruments, first working on a military project, and later working with ASICs, PC chipsets, and notebook BIOS.

When TI sold their notebook division to Acer, I moved on to Dell working as a notebook BIOS engineer, Windows programmer, and server BIOS engineer.

I love hiking in Utah, especially The Needles area of Canyonlands NP. I enjoy instrumental post-rock music, disc golf, and books about J. Robert Oppenheimer and the Manhattan project.

My favorite math fact: The limit as N approaches infinity of (1 + x/N)^N = e^x

I am retired now and loving it!
