Anybody know how to get ahold of SAM76 source code for Linux? - programming-languages

resistors.org site and foxthompson.net download links are stale/broken.
http://www.resistors.org/index.php/The_SAM76_programming_language
Every other link I've been able to track down on the 'net (mostly in old newsgroup posts) is broken. E-mails to the respective webmasters all bounced.
I have a morbid curiosity for arcane programming languages, and SAM76 sounded really interesting to look into and mess around with.
There are quite a few Lisp folks lurking on this site, so I figured somebody might have a lead... as I heard SAM76 had some early rudimentary functional-programming ideas.
Extra credit: link to track down a copy of the SAM76 manual!

Wayback has a copy of S76.exe for DOS and Windows
http://web.archive.org/web/20070505122813/http://www.resistors.org/index.php/The_SAM76_programming_language
http://wikivisually.com/wiki/SAM76
http://encycl.opentopia.com/term/Sam76
http://encycl.opentopia.com/term/Algorithms_in_Sam76
======================= F R E E W A R E =======================
User-Supported Software
If you are using this program and find it to be of value
your $20 contribution will be appreciated.
A contribution of $30 will bring you the SAM76 language
manual and other useful and interesting documentation.
SAM76 Inc., Box 257 RR1
Pennington, N.J., 08534
U.S.A.
Regardless of whether you make a contribution,
you are encouraged to copy and share this program.
> ---------------------------------------------------
http://web.archive.org/web/20110726163455/http://www.hypernews.org/HyperNews/get/computing/lang-list/2/2/1.html
I believe the R.E.S.I.S.T.O.R.s (have no idea what the letters
mean) was a group of kids who played with computers and
electronics in Claude Kagan's barn in Pennington, N.J. near
Princeton. Because the developer of TRAC, Calvin Mooers,
spent the rest of his life inventing the software patent and
sued everyone in sight, Claude (whose employer, Western
Electric Laboratories was sued by Mooers) created a very
similar language called "SAM76" supposedly based on S7 and M6
"languages from Bell Labs". I have the original tutorial
manual written and illustrated by the R.E.S.... and versions
on paper tape for the Altair and TRS-80 floppy disk. I think
it looked more like #os#is;; but you could change all the
special characters and command names so it could be made to
look EXACTLY like TRAC. Claude wrote some neat graphic games
for the TRS-80 in SAM76/TRAC.
http://web.archive.org/web/20110726163335/http://www.hypernews.org/HyperNews/get/computing/lang-list/2/2/1/3.html
Yes, we RESISTORS did indeed meet in Claude's barn, which was filled with old telephone and computer equipment. Claude's version of TRAC started on the PDP-8, migrated to the PDP-10, and for the legal reasons mentioned ended up as SAM-76. (FYI, SAM stands either for "Strachey and McIlroy" or "Same As Mooers". RESISTORS always stood for "Radically Emphatic Students Interested in Science, Technology, and Other Research Studies" as much as it stood for anything.)
Starting when we were members of the RESISTORS, Peter Eichenberger and I wrote a PDP-10 TRAC processor and later reimplemented it for the PDP-11, eventually adding a little multi-terminal time-sharing monitor. We kept a lower profile than Western Electric (either that, or as 19-year-olds we had no noticeable assets), so we and Mooers stayed on cordial terms.

I don't know if this is useful, but on this page there is an email address, dsf#hci.ucsd.edu, which seems to be Dave Fox's - the guy who maintained the page hosting the SAM76 file.

There's a pile of information in the old SIMTEL archives, specifically CPMUG Volume 34, which is included in the nearly 13G download here, including example code. You have your choice of "DSK" and "ARK" (ARC) format images. The standard file utility knows what format it's in ("CPMUG034.ARC: ARC archive data, dynamic LZW"). SIG/M vol. 53 also has SAM76 information, which you can find here.

Related

Programming Traditional Artists' Materials and Tools

Where is the Body of Knowledge for programmers interested in developing applications that simulate traditional artists' materials and tools, such as simulating natural paints?
Is there any substantial body of knowledge or resource for software engineers interested in creating applications that reproduce the effect of painting and drawing media such as watercolor, oil, chalk, charcoal and color pencil?
Clearly the knowledge exists and is shared by software engineers at Adobe, Corel, etc. But out here in the open, where is this information?
So far I've only come across fragmentary knowledge of a little technique here or there, but have not yet found any substantial resource. If you know where I need to look, please point me there.
Where are the best academic resources? Are there any blogs that specialize in this area? Are there organizations that specialize in this?
Eureka! NPR - for those of us heretofore uninitiated - refers to Non-Photorealistic Rendering, the "area of computer graphics that makes images resembling traditional artistic works" (Mould).
ACM Digital Library appears to be a very extensive source for research and academic papers on NPR. See here, here, and here for a good index of NPAR papers; search for 'NPR' or 'non-photorealistic rendering' on this page.
This page at the ACM Digital Library shows an example of ways to access the material: a member (who has paid the annual membership fee) can purchase a paper for $10, a non-member for $15. Alternatively, you can rent an article such as this one for $2.99 for 24 hours.
Non-photorealistic Rendering, Bruce Gooch, Amy Gooch - 2001: A preview of this book is on Google Books and it looks like a bulls-eye for subject matter expertise.
Non-Photorealistic Computer Graphics: Modeling, Rendering, and Animation (The Morgan Kaufmann Series in Computer Graphics) by Thomas Strothotte. This book looks like a goldmine. It covers things like stippling, drawing incorrect lines, drawing artistic lines, simulating painting with wet paint, simulating pencils, strokes and textures, etc.
Microsoft and Adobe have a visible presence in this field of work, and of course employ people who are prominently involved in the NPR field; you will see their names (or those of their employers) often appear as participants and sponsors of events like the upcoming NPAR 2011 (sponsored by Adobe).
Microsoft has the impressive Project Gustav (video) - a software application dedicated exclusively to very realistically simulating traditional artistic tools such as paint, chalk, pencil, etc.
Detail Preserving Paint Modeling of 3D Brushes.

Pivotal Suboptimal Decisions in the History of Software [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Throughout the history of software development, it sometimes happens that some person (usually unknown, probably unwittingly) made what, at the time, seemed a trivial, short-term decision that changed the world of programming. What events of this nature come to mind, and what have been our industry's response to mitigate the pain?
Illustration (the biggest one I can think of): When IBM designed the original PC, and decided to save a couple dollars in manufacturing costs by choosing the half-brain-dead 8088 with 8-bit-addressable memory, instead of one of the 16-bit options (8086, 680n, etc.), dooming us to 20 years of address offset calculations.
(In response, a lot of careers in unix platform development were begun.)
Somewhere toward the other end of the scale lies the decision someone made to have a monster Shift Lock key at the left end of the keyboard, instead of a Ctrl key.
Paul Allen deciding to use the / character for command line options in MS DOS.
Allocating only 2 digits for the year field.
And the mitigation was to spend huge amounts of money and time just before the fields overflowed to extend them and fix the code.
Ending Alan Turing's career when he was only 42.
Microsoft deciding to use backslash rather than forwardslash as the path delimiter. And failing to virtualize the drive letter.
Actually, the 8088 and 8086 have the same memory model and the same number of address bits (20). The only difference is the width of the external data bus, which is 8 bits on the 8088 and 16 bits on the 8086.
I would say that the use of inconsistent line endings by different operating systems (\n - UNIX, \r\n - DOS, \r - classic Mac) was a bad decision. Eventually Apple relented by making \n the default for OS X, but Microsoft is stubbornly sticking to \r\n. Even in Vista, Notepad cannot properly display a text file that uses \n as its line ending.
The best example of this problem is the ASCII mode of FTP, which adds a \r to each \n in a file transferred from a UNIX server to a Windows client - even if the file already contained \r\n.
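To make the three conventions concrete, here is a small sketch using only the Python standard library; `splitlines()` happens to recognize all three endings, which is one portable way to read "foreign" text files:

```python
# The same three logical lines under each historical convention:
unix = "a\nb\nc\n"        # UNIX: LF
dos  = "a\r\nb\r\nc\r\n"  # DOS/Windows: CR LF
mac  = "a\rb\rc\r"        # classic Mac OS: CR

# Python's splitlines() recognizes all three conventions:
assert unix.splitlines() == dos.splitlines() == mac.splitlines() == ["a", "b", "c"]
```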
There were a lot of suboptimal decisions in the design of C (operator precedence, the silly case statement, etc.), that are embedded in a whole lot of software in many languages (C, C++, Java, Objective-C, maybe C# - not familiar with that one).
I believe Dennis Ritchie remarked that he rethought precedence fairly soon, but wasn't going to change it. Not with a whole three installations and hundreds of thousands of lines of source code in the world.
Deciding that HTML should be used for anything other than marking up hypertext documents.
Microsoft's decision to use "C:\Program Files" as the standard folder name where programs should be installed in Windows. Suddenly working from a command prompt became much more complicated because of that wordy location with an embedded space. You couldn't just type:
cd \program files\MyCompany\MyProgram
Anytime you have a space in a directory name, you have to encase the entire thing in quotes, like this:
cd "\program files\MyCompany\MyProgram"
Why couldn't they have just called it c:\programs or something like that?
Apple ousting Steve Jobs (the first time) to be led by a succession of sugar-water salesmen and uninspired and uninspiring bean counters.
Gary Kildall not making a deal with IBM to license CP/M 86 to them, so they wouldn't use MS-DOS.
HTML as a browser display language.
HTML was originally designed as a content markup language, whose goal was to describe the contents of a document without making too many judgments about how that document should be displayed. Which was great, except that appearance is very important for most web pages and especially important for web applications.
So, we've been patching HTML ever since with CSS, XHTML, Javascript, Flash, Silverlight and Ajax all in order to provide consistent cross-browser display rendering, dynamic content and the client-side intelligence that web applications demand.
How many times have you wished that browser control languages had been done right in the first place?
Microsoft's decision not to add *NIX-like execute/noexecute file permissions and security in MS-DOS. I'd say that ninety percent of the Windows viruses (and spyware) we have today would be eliminated if every executable file had to be marked as executable before it could execute at all, let alone wreak havoc, on a system.
That one decision alone gave rise to the birth of the Antivirus industry.
Using 4 bytes for time_t and in the internet protocols' timestamps.
This has not bitten us yet - give it a bit more time.
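For the record, a quick sketch of where the 32-bit `time_t` runs out, using only the Python standard library:

```python
import datetime
import struct

# A signed 32-bit time_t counts seconds from the 1970 epoch, so it
# tops out at 2**31 - 1 seconds past midnight on 1 January 1970.
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
last = epoch + datetime.timedelta(seconds=2**31 - 1)
print(last)  # 2038-01-19 03:14:07+00:00

# One second later the value wraps negative, landing back in 1901:
wrapped, = struct.unpack("<i", struct.pack("<I", 2**31))
print(epoch + datetime.timedelta(seconds=wrapped))  # 1901-12-13 20:45:52+00:00
```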
Important web sites like banks still using "security questions" as secondary security for people who forget their passwords. Ask Sarah Palin how well that works when everybody can look up your mother's maiden name on Wikipedia. Or better yet, find the blog post that Bruce Schneier wrote about it.
EBCDIC, the IBM "standard" character set for mainframes. The collation sequence was "insane" (the letters of the alphabet are not contiguous).
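The gaps are easy to demonstrate, since Python ships an EBCDIC codec (code page 037, one of several EBCDIC variants):

```python
# In EBCDIC the alphabet is split into three blocks (A-I, J-R, S-Z)
# with non-letter code points in between, so 'I' + 1 != 'J'.
i_code = "I".encode("cp037")[0]
j_code = "J".encode("cp037")[0]
assert (i_code, j_code) == (0xC9, 0xD1)  # a gap of 8, not 1

# Consequence: an ASCII-style range test like 'A' <= c <= 'Z'
# also matches several non-letter code points in EBCDIC.
```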
Lisp's use of the names "CAR" and "CDR" instead of something reasonable for those basic functions.
Null References - a billion dollar mistake.
Netscape's decision to rewrite their browser from scratch. This is arguably one of the factors that contributed to Internet Explorer running away with browser market share between Netscape 4.0 and Netscape 6.0.
DOS's 8Dot3 file names, and Windows' adoption of using the file extension to determine what application to launch.
Using the QWERTY keyboard layout on computers instead of Dvorak.
Thinking that a password would be a neat way to control access.
Every language designer who has made their syntax different when the only reason was "just to be different". I'm thinking of S and R, where comments start with #, and _ is an assignment operator.
Microsoft copying the shortcut keys from the original Mac but using Ctrl instead of a Command key for Undo, Cut, Copy, Paste, etc. (Z, X, C, V, etc.), and adding a near worthless Windows key in the thumb position that does almost nothing compared to the pinky's numerous Ctrl key duties. (Modern Macs get a useful Ctrl key (for terminal commands), and a Command key in the thumb position (for program or system shortcuts) and an Alt (Option) key for typing weird characters.)
(See this article.)
Null-terminated strings
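The cost of the NUL convention shows up even from Python, via `ctypes` C-compatible buffers:

```python
import ctypes

# A C string ends at the first NUL byte, so any data after an
# embedded NUL is invisible to C-style string functions.
buf = ctypes.create_string_buffer(b"hello\x00world", 12)
assert buf.value == b"hello"             # .value stops at the NUL
assert buf.raw == b"hello\x00world\x00"  # the bytes are still there

# This is also why strlen() is O(n): the only way to find the
# length is to scan for the terminator.
```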
7-bits for text. And then "fixing" this with code pages. Encoding issues will kill me some day.
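The classic failure mode, for anyone who hasn't been bitten yet, is mojibake: bytes written in one encoding and read back in another.

```python
# UTF-8 bytes misinterpreted as Latin-1: the two-byte sequence for
# a single character comes back as two wrong characters.
raw = "é".encode("utf-8")        # b'\xc3\xa9'
mangled = raw.decode("latin-1")  # decodes each byte separately
assert raw == b"\xc3\xa9"
assert mangled == "Ã©"
```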
Deciding that "network order" for multi-byte numbers in the Internet Protocol would be high order byte first.
(At the time, the heterogeneous nature of the net meant this was a coin-toss decision. Thirty years later, Intel-derived processors so completely dominate the marketplace that lower-order-byte-first would probably have been the better choice.)
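For reference, "network order" in practice, shown with the Python standard library:

```python
import socket
import struct

# Network order is big-endian: the high-order byte of a multi-byte
# integer goes on the wire first, regardless of the host's native order.
value = 0x12345678
wire = struct.pack("!I", value)  # '!' selects network (big-endian) order
assert wire == b"\x12\x34\x56\x78"

# The classic BSD-socket helpers perform the same conversion:
assert socket.ntohl(socket.htonl(value)) == value
```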
Netscape's decision to support Java in their browser.
Microsoft's decision to base Window NT on DEC VMS instead of Unix.
The term Translation Lookaside Buffer (which should be called something along the lines of Page Cache or Address Cache).
Having a key for Caps Lock instead of for Shift Lock, in effect it's a Caps Reverse key, but with Shift Lock it could have been controllable.

Where can I find the graph of TeX's error log?

In Donald Knuth's Literate Programming, there was if I remember correctly a graph showing the evolution of TeX's number of bugs over time. This graph has remained flat for the past decade or so, suggesting that TeX might now be bug-free.
I would like to use this graph to illustrate the importance of bug-tracking software. Is it downloadable from somewhere?
The graphs I think you are referring to are in chapter 10 of Literate Programming (Knuth, D. E., 1992, Center for the Study of Language and Information), which is a reprint of Knuth, D. E., 1989, "The Errors of TeX", Softw. Pract. Exper. 19, 7 (Jul. 1989), 607-685.
I have not seen the graphs other than in book form but an updated list of errors is in a PDF at http://tug.org/texlive/Contents/live/texmf-doc/doc/english/knuth/errata/errorlog.pdf. Whereas the list in chapter 11 of my copy of Literate Programming covers 1978 to 1991, the PDF extends this to 2002. If you have installed TeX Live another version of this file, up to 1995, will probably be on your system as knuth/errorlog.tex.gz.
A PDF of a note on the list is at http://www.tug.org/TUGboat/Articles/tb10-4/tb26knut.pdf, a TUG conference keynote address from 1989.
As all the errors are numbered it could be a quick manual process to produce a rough (as the list is not in strict numerical order) graph by, for example, month. The wider range and content of all the graphs in the chapter would be a longer undertaking but perhaps an interesting programming exercise (the format of the TeX source may be much easier than the PDF for this).
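A rough sketch of that manual process in Python - note the entry format here is a guess for illustration; the real errorlog.tex layout may differ, so the regex would need adjusting:

```python
import collections
import re

# Hypothetical sketch: tally dated entries per month so the counts
# can be plotted. Assumes entries carry dates like "1978 Mar 10".
def errors_per_month(lines):
    counts = collections.Counter()
    for line in lines:
        m = re.search(r"\b(19|20)\d\d\s+[A-Z][a-z]{2}\b", line)
        if m:
            counts[m.group(0)] += 1
    return counts

sample = [
    "1. 1978 Mar 10: fixed glue setting",
    "2. 1978 Mar 12: ...",
    "3. 1979 Jul 1: ...",
]
# errors_per_month(sample) -> counts of 2 for '1978 Mar', 1 for '1979 Jul'
```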
Side Note:
Code is never 'bug free'. There are only "Expected quirks" and "Bugs yet to be discovered".

HCI: UI beyond the WIMP Paradigm

With the popularity of the Apple iPhone, the potential of the Microsoft Surface, and the sheer fluidity and innovation of the interfaces pioneered by Jeff Han of Perceptive Pixel ...
What are good examples of Graphical User Interfaces which have evolved beyond the
Windows, Icons, ( Mouse / Menu ), and Pointer paradigm ?
Are you only interested in GUIs? A lot of research has been done and continues to be done on tangible interfaces, for example, which fall outside of that category (although they can include computer graphics). The User Interface Wikipedia page might be a good place to start. You might also want to explore the ACM CHI Conference. I used to know some of the people who worked on zooming interfaces; the Human-Computer Interaction Lab at the University of Maryland also has a bunch of links which you may find interesting.
Lastly I will point out that a lot of innovative user interface ideas work better in demos than they do in real use. I bring that up because your example, as a couple of commenters have pointed out, might, if applied inappropriately, be tiring to use for any extended period of time. Note that light pens were, for the most part, replaced by mice. Good design sometimes goes against naive intuition (mine anyway). There is a nice rant on this topic with regard to 3d graphics on useit.com.
Technically, the interfaces you are looking for may be called post-WIMP user interfaces, according to a paper of the same name by Andries van Dam. The reason we need other paradigms is that WIMP is not good enough, especially for specific applications such as 3D model manipulation.
To those who think that UI research builds only cool-looking but non-practical demos, the first mouse was bulky and it took decades to be prevalent. Also Douglas Engelbart, the inventor, thought people would use both mouse and (a short form of) keyboard at the same time. This shows that even a pioneer of the field had a wrong vision about the future.
Since we are still in WIMP era, there are diverse comments on how the future will be (and most of them must be wrong.) Please search for these keywords in Google for more details.
Programming by example/demonstration
In short, in this paradigm, users show what they want to do and the computer learns the new behavior.
3D User Interfaces
I guess everybody knows and has seen many examples of this interface before. Despite a lot of hot debates on its usefulness, a part of 3D interface ongoing research has been implemented into many leading operating systems. The state of the art could be BumpTop. See also: Zooming User Interfaces
Pen-based/Sketch-based/Gesture-based Computing
Though this interface may use the same hardware setup as WIMP, users command the machine through strokes rather than point-and-click - and strokes carry richer information.
Direct-touch User Interface
This is like Microsoft's Surface or Apple's iPhone, but it doesn't have to be a tabletop. The interactive surface can be vertical - say, a wall - or not flat at all.
Tangible User Interface
This has already been mentioned in another answer. This can work well with touch surface, a set of computer vision system, or augmented reality.
Voice User Interface, Mobile computing, Wearable Computers, Ubiquitous/Pervasive Computing, Human-Robot Interaction, etc.
Further information:
Noncommand User Interface by Jakob Nielsen (1993) is another seminal paper on the topic.
If you want some theoretical concepts on GUIs, consider looking at vis, by Tuomo Valkonen. Tuomo has been extremely critical of the WIMP concept for a long time; he developed the Ion window manager, one of many tiling window managers around. Tiling WMs are actually a performance win for the user when used right.
Vis is the idea of a UI which actually adapts to the needs of the particular user and his environment, including vision impairment, tactile preferences (mouse or keyboard), preferred language (to better suit right-to-left languages), preferred visual presentation (button order, Mac-style or Windows-style), better use of available space, corporate identity, etc. The UI definition is presentation-free; the only things allowed are input/output parameters and their relationships. The layout algorithms and ergonomic constraints of the GUI itself are defined exactly once, at system level and in the user's preferences. Essentially, this allows for any kind of GUI as long as the data to be shown is clearly defined. A GUI for a mobile device is as possible as a text-terminal UI or a voice interface.
How about mouse gestures?
A somewhat unknown, relatively new and highly underestimated UI feature.
They tend to have a somewhat steeper learning curve than icons because of their invisibility (if nobody tells you they exist, they stay invisible), but they can be a real time saver for the more experienced user (I get really aggravated when I have to browse without mouse gestures).
It's kind of like the hotkey for the mouse.
Sticking to GUIs puts limits on the physical properties of the hardware. Users have to be able to read a screen and respond in some way. The iPhone, for example: its interface is the whole top surface, so physical size and the IxD are opposing factors.
Around Christmas I wrote a paper exploring the potential for a wearable BCI-controlled device. Now, I'm not suggesting we're ready to start building such devices, but the lessons learnt are valid. I found that most users liked the idea of using language as the primary interaction medium. Crucially though, all expressed concerns about ambiguity and confirmation.
The WIMP paradigm is one that relies on very precise, definite actions - usually button pressing. Additionally, as Nielsen reminds us, good feedback is essential. WIMP systems are usually pretty good at (or at least have the potential to be good at) immediately announcing the receipt and outcome of a user's actions.
To escape these paired requirements, it seems we really need to write software that users can trust. This might mean being context aware, or it might mean having some sort of structured query language based on a subset of English, or it might mean something entirely different. What it certainly means though, is that we'd be free of the desktop and finally be able to deploy a seamlessly integrated computing experience.
NUI Group people work primarily on multi-touch interfaces and you can see some nice examples of modern, more human-friendly designs (not counting the endless photo-organizing-app demos ;) ).
People are used to WIMP, the other main issue is that most of the other "Cool" interfaces require specialized hardware.
I'm not in journalism; I write software for a living.
vim!
It's definitely outside the realm of WIMP, but whether it's beyond it or way behind it is up to judgment!
I would recommend the following paper:
Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 201-210. see DOI

Writing Color Calibration Data to a TIFF or PNG file

My custom homebrew photography processing software, running on 64 bit Linux/GNU, writes out PNG and TIFF files. These are to be sent to a quality printing shop to be made into fine art. Working with interior designers - it's important to get the colors just right!
The print shops usually have no trouble with TIFFs and PNGs made by commercial software such as Photoshop. But even with the TIFF 6.0 spec, the PNG spec, and other info in hand, it is not clear how to include color calibration data or implement a color management system on Linux. My files are often rejected as faulty, without sufficient error reports to make fixes.
This has been a nasty problem for many people for a while. Even my contacts at the Hollywood post-production studios are struggling with this issue. One studio even wanted to hire me to take care of their color calibration, thinking I was the expert - but no, I am just as blind and lost as everyone else!
Does anyone know of good code examples, detailed technical information, or have any other enlightenment? Or time to switch to pure Apple?
Take a look at LittleCMS
http://www.littlecms.com/
This page has the code for applying it to TIFF
http://www.littlecms.com/newutils.htm
The basic thing you need to know is that color profile data needs to be stored in the metadata of the file itself.
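To show how simple the PNG side of that metadata is: per the PNG spec, the iCCP chunk carries a profile name, a NUL separator, a compression-method byte of 0, and the zlib-deflated ICC profile, and it must sit between IHDR and the first IDAT. In practice LittleCMS or libpng handles this for you; the following is just a hand-rolled sketch using only the Python standard library:

```python
import struct
import zlib

def make_iccp_chunk(profile: bytes, name: str = "ICC profile") -> bytes:
    """Build a PNG iCCP chunk: name, NUL, method 0 (zlib), deflated profile."""
    data = name.encode("latin-1") + b"\x00\x00" + zlib.compress(profile)
    body = b"iCCP" + data
    crc = zlib.crc32(body) & 0xFFFFFFFF  # CRC covers chunk type + data
    return struct.pack(">I", len(data)) + body + struct.pack(">I", crc)

def embed_in_png(png: bytes, profile: bytes) -> bytes:
    """Splice the chunk right after IHDR (8-byte signature + 25-byte IHDR)."""
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    return png[:33] + make_iccp_chunk(profile) + png[33:]
```

(TIFF embeds the same ICC payload completely differently, as a tag in the IFD, which is why per-format validation matters.)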
There is a consultant called Charles Poynton who specialises in this area. I work for one of the post production studios you mention (albeit in london not hollywood), and have seen him speak on the subject a couple of times. His website contains a lot of the material he presents and you might find something of use there. He also has a book called Digital Video and HDTV Algorithms and Interfaces which is not as heavy as the title might suggest! While these resources might not answer your question directly, it might provide a spring board to other solutions.
More specifically, which libraries are you using to write the PNG and TIFF files? You mention the software is homebrew, but how custom is it exactly? Post-processing the images in an image manipulation program (such as ImageMagick or dcraw) might allow you to inject this information into the header more successfully.
Sorry, I don't have any specific answers, but maybe something that will point you a bit further in the right direction...
As a GNU/Linux user, you’ll want to consider DispcalGUI – http://dispcalgui.hoech.net/ – a GNOME-based GUI that centralizes color management, ICC profile management, and (crucially for your case) device calibration. It can talk to well-known pro- and mid-level hardware, e.g., i1, X-Rite, Spyder, etc.
But before you get into that – you say you are generating your files to spec; are you validating your output using a test suite specific to the format in question? If not, here are three to get you started:
imagetestsuite supports the well-known formats: https://code.google.com/p/imagetestsuite/w/list?can=1&q=
The Luminous* test suite is a JIRA plugin, if that’s your thing: https://marketplace.atlassian.com/plugins/com.luminouslead.plugin.jira.testsuite.LuminousTestSuite
FLOSS decoder implementations often have one you can use, e.g. OpenJPEG – https://code.google.com/p/openjpeg/wiki/TestSuiteDocumentation
But even barring all of those, it seems like your problem is with embedded ICC data – which is two specs in one. First, there’s the host image-file format, and they all handle embedding differently (meaning the ICC data will likely look totally different when embedded in a TIFF than, say, a JPEG or WebP file). Second, there is the ICC spec itself. It is documented here: http://color.org/v4spec.xalter – and you may also want to look at the source for the aforementioned dispcalGUI, which includes a very legible and hackable ICC profile class in Python: http://sourceforge.net/p/dispcalgui/code/HEAD/tree/trunk/dispcalGUI/ICCProfile.py
Full disclosure: I have contributed to that very ICC profile class, to which I just linked in that last ¶
That’s the basics (many of which you have no doubt covered)... beyond that, if you post more information about what exactly is going wrong, I’d be interested to look it over. Good luck with it either way.
* NB. This project is unrelated to the long-standing photography website, “the Luminous Landscape”
