
Differences between LISP 1.5 and Common Lisp, Part 1:

[Edit: I didn't mean to put a colon in the title.]
In this post we'll be looking at some of the things that make LISP 1.5 and Common Lisp different. There isn't too much surviving LISP 1.5 code, but some of the code that is still around is interesting and worthy of study.
The differences between LISP 1.5 and Common Lisp can be classified into the following groups:
  1. Superficial differences—matters of syntax
  2. Conventional differences—matters of code style and form
  3. Fundamental differences—matters of semantics
  4. Library differences—matters of available functions
This post will go through the first three of these groups in that order. A future post will discuss library differences, except for some functions dealing with character-based input and output, since they are a little world unto their own.
[Originally the library differences were part of this post, but it exceeded the length limit on posts (40000 characters)].

Superficial differences.

LISP 1.5 was used initially on computers that had very limited character sets. The machine on which it ran at MIT, the IBM 7090, used a six-bit, binary-coded decimal encoding for characters, which could theoretically represent up to sixty-four characters. In practice, only forty-six were widely used. The repertoire of this character set consisted of the twenty-six uppercase letters, the nine digits, the blank character ' ', and the ten special characters '-', '/', '=', '.', '$', ',', '(', ')', '*', and '+'. You might note the absence of the apostrophe/single quote—there was no shorthand for the quote operator in LISP 1.5 because no suitable character was available.
When the LISP 1.5 system read input from cards, it treated the end of a card not like a blank character (as is done in C, TeX, etc.), but as nothing. Therefore the first character of a symbol's name could be the last character of a card, the remaining characters appearing at the beginning of the next card. Lisp's syntax allowed for the omission of almost all whitespace besides that which was used as delimiters to separate tokens.
List syntax. Lists were contained within parentheses, as is the case in Common Lisp. From the beginning Lisp had the consing dot, which was written as a period in LISP 1.5; the interaction between the period when used as the consing dot and the period when used as the decimal point will be described shortly.
In LISP 1.5, the comma was equivalent to a blank character; both could be used to delimit items within a list. The LISP I Programmer's Manual, p. 24, tells us that
The commas in writing S-expressions may be omitted. This is an accident.
Number syntax. Numbers took one of three forms: fixed-point integers, floating-point numbers, and octal numbers. (Of course octal numbers were just an alternative notation for the fixed-point integers.)
Fixed-point integers were written simply as the decimal representation of the integers, with an optional sign. It isn't explicitly mentioned whether a plus sign is allowed in this case or if only a minus sign is, but floating-point syntax does allow an initial plus sign, so it makes sense that the fixed-point number syntax would as well.
Floating-point numbers had the syntax described by the following context-free grammar, where a term in square brackets indicates that the term is optional:
float:    [sign] integer '.' [integer] exponent
          [sign] integer '.' integer [exponent]
exponent: 'E' [sign] digit [digit]
integer:  digit
          integer digit
digit:    one of '0' '1' '2' '3' '4' '5' '6' '7' '8' '9'
sign:     one of '+' '-'
This grammar generates things like 100.3 and 1.E5 but not things like .01 or 14E2 or 100.. The manual seems to imply that if you wrote, say, (100. 200), the period would be treated as a consing dot [the result being (cons 100 200)].
Floating-point numbers are limited in absolute value to the interval (2^-128, 2^128), and eight digits are significant.
Octal numbers are defined by the following grammar:
octal:        [sign] octal-digits 'Q' [integer]
octal-digits: octal-digit [octal-digit] [octal-digit] [octal-digit]
              [octal-digit] [octal-digit] [octal-digit] [octal-digit]
              [octal-digit] [octal-digit] [octal-digit] [octal-digit]
octal-digit:  one of '0' '1' '2' '3' '4' '5' '6' '7'
The optional integer following 'Q' is a scale factor, which is a decimal integer representing an exponent with a base of 8. Positive octal numbers behave as one would expect: The value is shifted to the left 3×s bits, where s is the scale factor. Octal was useful on the IBM 7090, since it used thirty-six-bit words; twelve octal digits (which is the maximum allowed in an octal number in LISP 1.5) thus represent a single word in a convenient way that is more compact than binary (but still easily convertible to and from binary). If the number has a negative sign, then the thirty-sixth bit is logically ORed with 1.
The syntax of Common Lisp's numbers is a superset of that of LISP 1.5. The only major difference is in the notation of octal numbers; Common Lisp uses the sharpsign reader macro for that purpose. Because of the somewhat odd semantics of the minus sign in octal numbers in LISP 1.5, it is not necessarily trivial to convert a LISP 1.5 octal number into a Common Lisp expression resulting in the same value.
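Here is a sketch of how such a conversion might look (my own illustration, not from the manual), assuming the literal always contains 'Q', that the scale factor simply shifts within a thirty-six-bit word, and that a negative sign sets bit 35:

(defun lisp15-octal-value (string)
  "Return the 36-bit word denoted by a LISP 1.5 octal literal
such as \"7Q2\" or \"-3Q11\", as an unsigned integer."
  (let* ((negativep (char= (char string 0) #\-))
         (start (if (member (char string 0) '(#\+ #\-)) 1 0))
         (q (position #\Q string))
         (digits (parse-integer string :start start :end q :radix 8))
         (scale (if (< (1+ q) (length string))
                    (parse-integer string :start (1+ q))
                    0))
         ;; Shift left 3 bits per unit of scale, truncating to 36 bits.
         (word (ldb (byte 36 0) (ash digits (* 3 scale)))))
    (if negativep
        (logior word (ash 1 35)) ; OR the thirty-sixth bit with 1
        word)))

(lisp15-octal-value "7Q2") ; => 448, i.e., 7 shifted left by 6 bits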
Symbol syntax. Symbol names can be up to thirty characters in length. While the actual name of a symbol was kept on its property list under the pname indicator and could be any sequence of thirty characters, the syntax accepted by the read program for symbols was limited in a few ways. First, a name must not begin with a digit or with either of the characters '+' or '-', and its first two characters cannot both be '$'. Otherwise, all the alphanumeric characters are allowed, along with the special characters '+', '-', '=', '*', '/', and '$'. The fact that a symbol can't begin with a sign character or a digit has to do with the number syntax; the fact that a symbol can't begin with '$$' has to do with the mechanism by which the LISP 1.5 reader allowed you to write characters that are usually not allowed in symbols, which is described next.
Two dollar signs initiated the reading of what we today might call an "escape sequence". An escape sequence had the form "$$xSx", where x was any character and S was a sequence of up to thirty characters not including x. For example, $$x()x would get the symbol whose name is '()' and would print as '()'. Thus it is similar in purpose to Common Lisp's | syntax. There is a significant difference: It could not be embedded within a symbol, unlike Common Lisp's |. In this respect it is closer to Maclisp's | reader macro (which created a single token) than it is to Common Lisp's multiple escape character. In LISP 1.5, "A$$X()X$" would be read as (1) the symbol A$$X, (2) the empty list, (3) the symbol X.
The following code sets up a $ reader macro so that symbols using the $$ notation will be read in properly, while leaving things like $eof$ alone.
(defun dollar-sign-reader (stream character)
  (declare (ignore character))
  (let ((next (read-char stream t nil t)))
    (cond ((char= next #\$)
           ;; "$$xSx": the next character is the terminator; intern
           ;; everything up to its second occurrence as a symbol name.
           (let ((terminator (read-char stream t nil t)))
             (values (intern (with-output-to-string (name)
                               (loop for c := (read-char stream t nil t)
                                     until (char= c terminator)
                                     do (write-char c name)))))))
          (t
           ;; A lone "$": put it back and read the token normally.
           (unread-char next stream)
           (with-standard-io-syntax
             (read (make-concatenated-stream
                    (make-string-input-stream "$")
                    stream)
                   t nil t))))))

(set-macro-character #\$ #'dollar-sign-reader t)

Conventional differences.

LISP 1.5 is an old programming language. Generally, compared to its contemporaries (such as FORTRANs I–IV), it holds up well to modern standards, but sometimes its age does show. And there were some aspects of LISP 1.5 that might be surprising to programmers familiar only with Common Lisp or a Scheme.
M-expressions. John McCarthy's original concept of Lisp was a language with a syntax like this (from the LISP 1.5 Programmer's Manual, p. 11):
equal[x;y]=[atom[x]→[atom[y]→eq[x;y]; T→F];
            equal[car[x];car[y]]→equal[cdr[x];cdr[y]];
            T→F]
There are several things to note. First is the entirely different phrase structure. It is an infix language that looks much closer to mathematics than the Lisp we know and love. Square brackets are used instead of parentheses, and semicolons are used instead of commas (or blanks). When square brackets do not enclose function arguments (or parameters, when to the left of the equals sign), they set up a conditional expression; the arrows separate predicate expressions and consequent expressions.
If that was Lisp, then where do s-expressions come in? Answer: quoting. In the m-expression notation, uppercase strings of characters represent quoted symbols, and parenthesized lists represent quoted lists. Here is an example from page 13 of the manual:
λ[[x;y];cons[car[x];y]][(A B);(C D)] 
As an s-expression, this would be
((lambda (x y) (cons (car x) y)) '(A B) '(C D)) 
The majority of the code in the manual is presented in m-expression form.
So why did s-expressions stick? There are a number of reasons. The earliest Lisp interpreter was a translation of the program for eval in McCarthy's paper introducing Lisp, which interpreted quoted data; therefore it read code in the form of s-expressions. S-expressions are much easier for a computer to parse than m-expressions, and also more consistent. (Also, the character set mentioned above includes neither square brackets nor a semicolon, let alone a lambda character.) But in publications m-expressions were seen frequently; perhaps the syntax was seen as a kind of "Lisp pseudocode".
Comments. LISP 1.5 had no built-in commenting mechanism. It's easy enough to define a comment operator in the language, but it seems that nobody felt a need for one.
Interestingly, FORTRAN I had comments. Assembly languages of the time sort of had comments, in that they had a portion of each line/card that was ignored in which you could put any text. FORTRAN was ahead of its time.
(Historical note: The semicolon comment used in Common Lisp comes from Maclisp. Maclisp likely got it from PDP-10 assembly language, which let a semicolon and/or a line break terminate a statement; thus anything following a semicolon is ignored. The convention of octal numbers by default, decimal numbers being indicated by a trailing decimal point, of Maclisp too comes from the assembly language.)
Code formatting. The code in the manual that isn't written using m-expression syntax is generally lacking in meaningful indentation and spacing. Here is an example (p. 49):
(TH1 (LAMBDA (A1 A2 A C) (COND ((NULL A) (TH2 A1 A2 NIL NIL C)) (T (OR (MEMBER (CAR A) C) (COND ((ATOM (CAR A)) (TH1 (COND ((MEMBER (CAR A) A1) A1) (T (CONS (CAR A) A1))) A2 (CDR A) C)) (T (TH1 A1 (COND ((MEMBER (CAR A) A2) A2) (T (CONS (CAR A) A2))) (CDR A) C)))))))) 
Nowadays we might indent it like so:
(TH1 (LAMBDA (A1 A2 A C)
       (COND ((NULL A) (TH2 A1 A2 NIL NIL C))
             (T (OR (MEMBER (CAR A) C)
                    (COND ((ATOM (CAR A))
                           (TH1 (COND ((MEMBER (CAR A) A1) A1)
                                      (T (CONS (CAR A) A1)))
                                A2
                                (CDR A)
                                C))
                          (T (TH1 A1
                                  (COND ((MEMBER (CAR A) A2) A2)
                                        (T (CONS (CAR A) A2)))
                                  (CDR A)
                                  C))))))))
Part of the lack of formatting probably stems from the primarily punched-card-based programming world of the time; you would see the indented structure only by printing a listing of your code, so there was no need to format the punched cards carefully. LISP 1.5 allowed a very free format, especially when compared to FORTRAN; the consequence is that early LISP 1.5 programs are very difficult to read because of the lack of spacing, while old FORTRAN programs are at least limited to one statement per line.
The close relationship of Lisp and pretty-printing originates in programs developed to produce nicely formatted listings of Lisp code.
Lisp code from the mid-sixties used some peculiar formatting conventions that seem odd today. Here is a quote from Steele and Gabriel's Evolution of Lisp:
This intermediate example is derived from a 1966 coding style:
DEFINE((
(MEMBER (LAMBDA (A X) (COND ((NULL X) F)
                            ((EQ A (CAR X) ) T)
                            (T (MEMBER A (CDR X))) )))
))
The design of this style appears to take the name of the function, the arguments, and the very beginning of the COND as an idiom, and hence they are on the same line together. The branches of the COND clause line up, which shows the structure of the cases considered.
This kind of indentation is somewhat reminiscent of the formatting of Algol programs in publications.
Programming style. Old LISP 1.5 programs can seem somewhat primitive. There is heavy use of the prog feature, which is related partially to the programming style that was common at the time and partially to the lack of control structures in LISP 1.5. You could express iteration only by using recursion or by using prog+go; there wasn't a built-in looping facility. There is a library function called for that is something like the early form of Maclisp's do (the later form would be inherited in Common Lisp), but no surviving LISP 1.5 code uses it. [I'm thinking of making another post about converting programs using prog to the more structured forms that Common Lisp supports, if doing so would make the logic of the program clearer. Naturally there is a lot of literature on so-called "goto elimination" and doing it automatically, so it would not present any new knowledge, but it would have lots of Lisp examples.]
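As a taste of that (my own example, not from the LISP 1.5 manual), here is a LISP 1.5-style loop written with prog and go, followed by a Common Lisp do that expresses the same iteration:

(defun count-atoms-prog (l)
  ;; LISP 1.5 style: an explicit label and GO.
  (prog (n)
        (setq n 0)
   loop (cond ((null l) (return n)))
        (cond ((atom (car l)) (setq n (+ n 1))))
        (setq l (cdr l))
        (go loop)))

(defun count-atoms-do (l)
  ;; The same loop using DO.
  (do ((rest l (cdr rest))
       (n 0 (if (atom (car rest)) (+ n 1) n)))
      ((null rest) n)))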
LISP 1.5 did not have a let construct. You would use either a prog and setq or a lambda:
(let ((x y)) ...) 
is equivalent to
((lambda (x) ...) y) 
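In fact, that equivalence is essentially all a simple let needs to do. Here is a minimal sketch (my own, named my-let to avoid clashing with cl:let, and assuming every binding is a (variable value) pair) of a macro performing the expansion:

(defmacro my-let (bindings &body body)
  ;; Expand (my-let ((x y)) ...) into ((lambda (x) ...) y).
  `((lambda ,(mapcar #'first bindings) ,@body)
    ,@(mapcar #'second bindings)))

(my-let ((x 3) (y 4))
  (+ x y)) ; => 7, just like ((lambda (x y) (+ x y)) 3 4)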
Something that stands out immediately when reading LISP 1.5 code is the heavy, heavy use of combinations of car and cdr. This might help (though car and cdr should be left alone when they are used with dotted pairs):
(car x)   = (first x)
(cdr x)   = (rest x)
(caar x)  = (first (first x))
(cadr x)  = (second x)
(cdar x)  = (rest (first x))
(cddr x)  = (rest (rest x))
(caaar x) = (first (first (first x)))
(caadr x) = (first (second x))
(cadar x) = (second (first x))
(caddr x) = (third x)
(cdaar x) = (rest (first (first x)))
(cdadr x) = (rest (second x))
(cddar x) = (rest (rest (first x)))
(cdddr x) = (rest (rest (rest x)))
Here are some higher compositions, even though LISP 1.5 doesn't have them.
(caaaar x) = (first (first (first (first x))))
(caaadr x) = (first (first (second x)))
(caadar x) = (first (second (first x)))
(caaddr x) = (first (third x))
(cadaar x) = (second (first (first x)))
(cadadr x) = (second (second x))
(caddar x) = (third (first x))
(cadddr x) = (fourth x)
(cdaaar x) = (rest (first (first (first x))))
(cdaadr x) = (rest (first (second x)))
(cdadar x) = (rest (second (first x)))
(cdaddr x) = (rest (third x))
(cddaar x) = (rest (rest (first (first x))))
(cddadr x) = (rest (rest (second x)))
(cdddar x) = (rest (rest (rest (first x))))
(cddddr x) = (rest (rest (rest (rest x))))
Things like defstruct and Flavors were many years away. For a long time, Lisp dialects had lists as the only kind of structured data, and programmers rarely defined functions with meaningful names to access components of data structures that are represented as lists. Part of understanding old Lisp code is figuring out how data structures are built up and what their components signify.
In LISP 1.5, it's fairly common to see nil used where today we'd use (). For example:
(LAMBDA NIL ...) 
instead of
(LAMBDA () ...) 
or
(PROG NIL ...)
instead of
(PROG () ...) 
Actually this practice was used in other Lisp dialects as well, although it isn't really seen in newer code.
Identifiers. If you examine the list of all the symbols described in the LISP 1.5 Programmer's Manual, you will notice that none of them differ only in the characters after the sixth character. In other words, it is as if symbol names have only six significant characters, so that abcdef1 and abcdef2 would be considered equal. But it doesn't seem like that was actually the case, since there is no mention of such a limitation in the manual. Another thing of note is that many symbols are six characters or fewer in length.
(A sequence of six characters is nice to store on the hardware on which LISP 1.5 was running. The processor used thirty-six-bit words, and characters were six-bit; therefore six characters fit in a single word. It is conceivable that it might be more efficient to search for names that take only a single word to store than for names that take more than one word to store, but I don't know enough about the computer or implementation of LISP 1.5 to know if that's true.)
Even though the limit on names was thirty characters (the longest symbol names in standard Common Lisp are update-instance-for-different-class and update-instance-for-redefined-class, both thirty-five characters in length), only a few of the LISP 1.5 names are not abbreviated. Things like terpri ("terminate print") and even car and cdr ("contents of address part of register" and "contents of decrement part of register"), which have stuck around until today, are pretty inscrutable if you don't know what they mean.
Thankfully the modern style is to limit abbreviations. Comparing the names that were introduced in Common Lisp versus those that have survived from LISP 1.5 (see the "Library" section below) shows a clear preference for good naming in Common Lisp, even at the risk of lengthy names. The multiple-value-bind operator could easily have been named mv-bind, but it wasn't.

Fundamental differences.

Truth values. Common Lisp has a single value considered to be false, which happens to be the same as the empty list. It can be represented either by the symbol nil or by (); either of these may be quoted with no difference in meaning. Anything else, when considered as a boolean, is true; however, there is a self-evaluating symbol, t, that traditionally is used as the truth value whenever there is no other more appropriate one to use.
In LISP 1.5, the situation was similar: Just like Common Lisp, nil or the empty list are false and everything else is true. But the symbol nil was used by programmers only as the empty list; another symbol, f, was used as the boolean false. It turns out that f is actually a constant whose value is nil. LISP 1.5 had a truth symbol t, like Common Lisp, but it wasn't self-evaluating. Instead, it was a constant whose permanent value was *t*, which was self-evaluating. The following code will set things up so that the LISP 1.5 constants work properly:
(defconstant *t* t) ; (eq *t* t) is true
(defconstant f nil)
Recall the practice in older Lisp code that was mentioned above of using nil in forms like (lambda nil ...) and (prog nil ...), where today we would probably use (). Perhaps this usage is related to the fact that nil represented an empty list more than it did a false value; or perhaps the fact that it seems so odd to us now is related to the fact that there is even less of a distinction between nil the empty list and nil the false value in Common Lisp (there is no separate f constant).
Function storage. In Common Lisp, when you define a function with defun, that definition gets stored somehow in the global environment. LISP 1.5 stores functions in a much simpler way: A function definition goes on the property list of the symbol naming it. The indicator under which the definition is stored is either expr or fexpr or subr or fsubr. The expr/fexpr indicators were used when the function was interpreted (written in Lisp); the subr/fsubr indicators were used when the function was compiled (or written in machine code). Functions can be referred to based on the property under which their definitions are stored; for example, if a function named f has a definition written in Lisp, we might say that "f is an expr."
When a function is interpreted, its lambda expression is what is stored. When a function is compiled or machine coded, a pointer to its address in memory is what is stored.
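A rough sketch of the idea in Common Lisp (my own illustration; the name call-expr is hypothetical, and fexprs and environments are ignored entirely):

;; Store an interpreted definition on the symbol's property list,
;; then fetch and apply it, as LISP 1.5 did with the expr indicator.
(setf (get 'square 'expr) '(lambda (x) (* x x)))

(defun call-expr (name &rest arguments)
  (apply (coerce (get name 'expr) 'function) arguments))

(call-expr 'square 5) ; => 25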
The choice between expr and fexpr and between subr and fsubr is based on evaluation. Functions that are exprs and subrs are evaluated normally; for example, an expr is effectively replaced by its lambda expression. But when an fexpr or an fsubr is to be processed, the arguments are not evaluated. Instead they are put in a list. The fexpr or fsubr definition is then passed that list and the current environment. The reason for the latter is so that the arguments can be selectively evaluated using eval (which took a second argument containing the environment in which evaluation is to occur). Here is an example of what the definition of an fexpr might look like, LISP 1.5 style. This function takes any number of arguments and prints them all, returning nil.
(LAMBDA (A E)
  (PROG ()
   LOOP (PRINT (EVAL (CAR A) E))
        (COND ((NULL (CDR A)) (RETURN NIL)))
        (SETQ A (CDR A))
        (GO LOOP)))
The "f" in "fexpr" and "fsubr" seems to stand for "form", since fexpr and fsubr functions got passed a whole form.
The top level: evalquote. In Common Lisp, the interpreter is usually available interactively in the form of a "Read-Evaluate-Print-Loop", for which a common abbreviation is "REPL". Its structure is exactly as you would expect from that name: Repeatedly read a form, evaluate it (using eval), and print the results. Note that this model is essentially the same as top-level file processing, except that in file processing only the results of the last form are printed, when processing is done.
In LISP 1.5, the top level is not eval, but evalquote. Here is how you could implement evalquote in Common Lisp:
(defun evalquote (operator arguments)
  (eval (cons operator arguments)))
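For instance (my own example; this works here only because the numeric arguments evaluate to themselves):

(evalquote 'cons '(1 2)) ; => (1 . 2), as if we had written (eval '(cons 1 2))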
LISP 1.5 programs commonly look like this (define takes a list of function definitions):
DEFINE (( (FUNCTION1 (LAMBDA () ...)) (FUNCTION2 (LAMBDA () ...)) ... )) 
which evalquote would process as though it had been written
(DEFINE ( (FUNCTION1 (LAMBDA () ...)) (FUNCTION2 (LAMBDA () ...)) ... )) 
Evaluation, scope, extent. Before further discussion, here is the evaluator for LISP 1.5 as presented in Appendix B, translated from m-expressions to approximate Common Lisp syntax. This code won't run as it is, but it should give you an idea of how the LISP 1.5 interpreter worked.
(defun evalquote (function arguments)
  (if (and (atom function)
           (or (get function 'fexpr)
               (get function 'fsubr)))
      (eval (cons function arguments) nil)
      (apply function arguments nil)))

(defun apply (function arguments environment)
  (cond ((null function) nil)
        ((atom function)
         (let ((expr (get function 'expr))
               (subr (get function 'subr)))
           (cond (expr (apply expr arguments environment))
                 (subr ; see below
                  )
                 (t (apply (cdr (sassoc function environment
                                        (lambda () (error "A2"))))
                           arguments
                           environment)))))
        ((eq (car function) 'label)
         (apply (caddr function)
                arguments
                (cons (cons (cadr function) (caddr function))
                      environment)))
        ((eq (car function) 'funarg)
         (apply (cadr function) arguments (caddr function)))
        ((eq (car function) 'lambda)
         (eval (caddr function)
               (nconc (pair (cadr function) arguments)
                      environment)))
        (t (apply (eval function environment) arguments environment))))

(defun eval (form environment)
  (cond ((null form) nil)
        ((numberp form) form)
        ((atom form)
         (let ((apval (get form 'apval)))
           (if apval
               (car apval)
               (cdr (sassoc form environment (lambda () (error "A8")))))))
        ((eq (car form) 'quote) (cadr form))
        ((eq (car form) 'function) (list 'funarg (cadr form) environment))
        ((eq (car form) 'cond) (evcon (cdr form) environment))
        ((atom (car form))
         (let ((expr (get (car form) 'expr))
               (fexpr (get (car form) 'fexpr))
               (subr (get (car form) 'subr))
               (fsubr (get (car form) 'fsubr)))
           (cond (expr (apply expr (evlis (cdr form) environment) environment))
                 (fexpr (apply fexpr (list (cdr form) environment) environment))
                 (subr ; see below
                  )
                 (fsubr ; see below
                  )
                 (t (eval (cons (cdr (sassoc (car form) environment
                                             (lambda () (error "A9"))))
                                (cdr form))
                          environment)))))
        (t (apply (car form) (evlis (cdr form) environment) environment))))

(defun evcon (cond environment)
  (cond ((null cond) (error "A3"))
        ((eval (caar cond) environment) (eval (cadar cond) environment))
        (t (evcon (cdr cond) environment))))

(defun evlis (list environment)
  (maplist (lambda (j) (eval (car j) environment))
           list))
(The definition of evalquote earlier was a simplification to avoid the special case of special operators in it. LISP 1.5's apply can't handle special operators (which is also true of Common Lisp's apply). Hopefully the little white lie can be forgiven.)
There are several things to note about these definitions. First, it should be reiterated that they will not run in Common Lisp, for many reasons. Second, in evcon an error has been corrected; the original says in the consequent of the second branch (effectively)
(eval (cadar environment) environment) 
Now to address the "see below" comments. The manual describes the actions of the interpreter in terms of a function called spread, which takes the arguments given in a Lisp function call and puts them into the machine registers expected by LISP 1.5's calling convention, and then executes an unconditional branch instruction after updating the value of a variable called $ALIST to the environment passed to eval or to apply. In the case of an fsubr, instead of calling spread, the interpreter places the two arguments directly in the registers, since an fsubr always gets exactly two arguments.
You will note that apply is considered to be a part of the evaluator, while in Common Lisp apply and eval are quite different. Here apply takes an environment as its final argument, just like eval. This fact highlights an incredibly important difference between LISP 1.5 and Common Lisp: When a function is executed in LISP 1.5, it is run in the environment of the function calling it. In contrast, Common Lisp creates a new lexical environment whenever a function is called. To exemplify the difference, the following code would be valid if Common Lisp used LISP 1.5's evaluation model:
(defun weird (a b)
  (other-weird 5))

(defun other-weird (n)
  (+ a b n))
In Common Lisp, the function weird creates a lexical environment with two variables (the parameters a and b), which have lexical scope and indefinite extent. Since the body of other-weird is not lexically within the form that binds a and b, trying to make reference to those variables is incorrect. You can thwart Common Lisp's lexical scoping by declaring those variables to have indefinite scope:
(defun weird (a b)
  (declare (special a b))
  (other-weird 5))

(defun other-weird (n)
  (declare (special a b))
  (+ a b n))
The special declaration tells the implementation that the variables a and b are to have indefinite scope and dynamic extent.
Let's talk now about the funarg branch of apply. The function/funarg device was introduced some time in the sixties in an attempt to solve the scoping problem exemplified by the following problematic definition (using Common Lisp syntax):
(defun testr (x p f u)
  (cond ((funcall p x) (funcall f x))
        ((atom x) (funcall u))
        (t (testr (cdr x)
                  p
                  f
                  (lambda () (testr (car x) p f u))))))
This function is taken from page 11 of John McCarthy's History of Lisp.
The only problematic part is the (car x) in the lambda in the final branch. The LISP 1.5 evaluator does little more than textual substitution when applying functions; therefore (car x) will refer to whatever x is bound to when the function (lambda expression) is applied, not to what x was when the function was written.
How do you fix this issue? The solution employed in LISP 1.5 was to capture the environment present when the function expression is written, using the function operator. When the evaluator encounters a form that looks like (function f), it converts it into (funarg f environment), where environment is the current environment during that call to eval. Then when apply gets a funarg form, it applies the function in the environment stored in the funarg form instead of the environment passed to apply.
Something interesting arises as a consequence of how the evaluator works. Common Lisp, as is well known, has two separate name spaces for functions and for variables. If a Common Lisp implementation encounters
(lambda (f x) (f x)) 
the result is not a function applying one of its arguments to its other argument, but rather a function applying a function named f to its second argument. You have to use an operator like funcall or apply to use the functional value of the f parameter. If there is no function named f, then you will get an error. In contrast, LISP 1.5 will eventually find the parameter f and apply its functional value, if there isn't a function named f—but it will check for a function definition first. If a Lisp dialect that has a single name space is called a "Lisp-1", and one that has two name spaces is called a "Lisp-2", then I guess you could call LISP 1.5 a "Lisp-1.5"!
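Here is a small illustration of the two name spaces (my own example, not from the manual):

(defun call-with-five (f)
  ;; Here f is a variable; its functional value must be invoked
  ;; with funcall. Writing (f 5) would instead look for a function
  ;; *named* f.
  (funcall f 5))

(call-with-five #'1+)                 ; => 6
(call-with-five (lambda (x) (* x x))) ; => 25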
How can we deal with indefinite scope when trying to get LISP 1.5 programs to run in Common Lisp? Well, with any luck it won't matter; ideally the program does not have any references to variables that would be out of scope in Common Lisp. However, if there are such references, there is a fairly simple fix: Add special declarations everywhere. For example, say that we have the following (contrived) program, in which define has been translated into defun forms to make it simpler to deal with:
(defun f (x)
  (prog (m)
        (setq m a)
        (setq a 7)
        (return (+ m b x))))

(defun g (l)
  (h (* b a)))

(defun h (i)
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
        (setq a 4)
        (setq b 6)
        (setq i 3)
        (return (g (f 10)))))
The result of calling p should be 10/63. To make it work, add special declarations wherever necessary:
(defun f (x)
  (declare (special a b))
  (prog (m)
        (setq m a)
        (setq a 7)
        (return (+ m b x))))

(defun g (l)
  (declare (special a b l))
  (h (* b a)))

(defun h (i)
  (declare (special a b l i))
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
        (declare (special a b i))
        (setq a 4)
        (setq b 6)
        (setq i 3)
        (return (g (f 10)))))
Be careful about the placement of the declarations. It is required that the one in p be inside the prog, since that is where the variables are bound; putting it at the beginning (i.e., before the prog) would do nothing because the prog would create new lexical bindings.
This method is not optimal, since it really doesn't help too much with understanding how the code works (although being able to see which variables are free and which are bound, by looking at the declarations, is very helpful). A better way would be to factor out the variables shared among several functions (as long as you are sure they are used in only those functions) and put them in a let. Doing that is more difficult than using global variables, but it leads to code that is easier to reason about. Of course, if a variable is used in a large number of functions, it might well be a better choice to create a global variable with defvar or defparameter.
Not all LISP 1.5 code is as bad as that example!
Join us next time as we look at the LISP 1.5 library. In the future, I think I'll make some posts talking about getting specific programs running. If you see any errors, please let me know.

Comprehensive Guide for getting into Home Recording

I'm going to borrow from a few sources and do my best to make this cohesive, but this question comes up a lot. I thought we had a comprehensive guide, but it doesn't appear so. In the absence of this, I feel that a lot of you could use a simple place to go for some basics on recording. There are a couple of great resources online already on some drumming forums, but I don't think they will be around forever.
Some background on myself - I have been drumming a long time. During that time, home recording has gone from using a cassette deck to having a full-blown studio at your fingertips. The technology in the last 15 years has gotten so good it really is incredible. When I was trying to decide what I wanted to do with my life, I decided to go to school for audio engineering in a world-class studio. During this time I had access to the studio and was able to assist with engineering on several projects. This was awesome, and I came out with a working knowledge of SIGNAL CHAIN, how audio works in the digital realm, how microphones work, studio design, etc. Can I answer your questions? Yes.

First up: Signal Chain! This is the basic building block of recording. Ever seen a "I have this plugged in but am getting no sound!" thread? Yeah, signal chain.

A "Signal Chain" is the path your audio follows, from sound source, to the recording device, and back out of your monitors (speakers to you normies).
A typical complete signal chain might go something like this:
1] Instrument/sound source
2] Microphone/Transducer/Pickup
3] Cable
4] Mic Preamp/DI Box
5] Analog-to-Digital Converter
6] Digital transmission medium [digital data get recoded for USB or FW transfer]
7] Digital recording device
8] DSP and digital summing/playback engine
9] Digital-to-Analog Converter
10] Analog output stage [line outputs and output gain/volume control]
11] Monitors/Playback device [headphones/other transducers]
Important Terms, Definitions, and explanations (this will be where the "core" information is):
1] AD Conversion: the process by which the electrical signal is "converted" to a stream of digital code [binary, 1s and 0s]. This is accomplished, basically, by taking digital pictures of the audio, and the number of "pictures" taken per second is known as the "sampling rate/frequency". So the CD standard of 44.1k is 44,100 "pictures" per second of digital code that represents the electrical "wave" of audio. It should be noted that in order to reproduce a frequency accurately, the sampling rate must be TWICE that of the desired frequency (see: Nyquist-Shannon Theorem). So a 44.1k digital audio device can, in fact, only record frequencies as high as 22.05kHz, and in the real world the actual upper frequency limit is lower, because the AD device employs a LOW-PASS filter to protect the circuitry from distortion and digital errors called "ALIASING." Confused yet? Don't worry, there's more... We haven't even talked about bit depth! There are 2 settings for recording digitally: sample rate and bit depth. Sample rate, as stated above, determines the frequencies captured; bit depth is the resolution at which each sample is measured. Higher bit depth = more accurate sound wave representation. Generally speaking, I record at 96kHz/24 bit depth. This makes huge files, but gets really accurate audio. Why does it make huge files? Well, if you are sampling 96,000 times per second and applying 24 bits to each sample, multiply it out and you get 96,000 * 24 = 2,304,000 bits per second, or roughly 0.27MB per second for ONE TRACK (see the short sketch below). If that track is 5 minutes long, that is a file that is about 82.4MB in size. Now let's say you used 8 inputs on an interface; that is, in total, about 659MB of data. Wow, that escalates quick, right? There is something else to note as well here: your CPU has to calculate all of this, roughly 18.4 million bits to handle PER SECOND in this same 8-input scenario. This is why CPU speed and RAM are super important when recording digitally.
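A quick sketch of that arithmetic in Common Lisp (my own illustration; the function name is made up):

(defun track-bytes (sample-rate bit-depth seconds)
  ;; bits per second = sample-rate * bit-depth; divide by 8 for bytes.
  (/ (* sample-rate bit-depth seconds) 8))

(track-bytes 96000 24 300)       ; => 86400000 bytes (~82.4MB), one 5-minute track
(* 8 (track-bytes 96000 24 300)) ; => 691200000 bytes (~659MB), eight tracks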
2] DA conversion: the process by which the digital code (the computer representation of a sound wave) is transformed back into electrical energy in the proper shape. In an oversimplified explanation, the code is measured and the output of the converter reflects the value of the code by changing voltage. Think of a sound wave on a grid: time runs along the X axis (the horizontal axis), and the wave repeats along it at some frequency... but there is a vertical axis too. This is called AMPLITUDE, or how much energy the wave is generating. People refer to this as how 'loud' a sound is, but that's not entirely correct. You can have a high amplitude wave that is played at a quiet volume. It's important to distinguish the two. How loud a sound is can be controlled by the volume on a speaker or transducer. But that has no impact on how much amplitude the sound wave has in the digital space or "in the wire" on its way to the transducer. So don't get hung up on how "loud" a waveform is; what matters is how much amplitude it has when talking about it "in the box", before it gets to the speaker/headphone/whatever.
3] Cables: An often overlooked expense and tool, cables can, in fact, make or break your recording. The multitude of types of cable is determined by the connector, the gauge (thickness), shielding, type of conductor, etc... Just some bullet points on cables:
- Always get the highest quality cabling you can afford. Low quality cables often employ shielding that doesn't effectively protect against AC hum (60 cycle hum), RF interference (causing your cable to act as a gigantic AM/CB radio antenna), or grounding noise introduced by other components in your system.
- The way cables are coiled and treated can determine their lifespan and effectiveness. A kinked cable can mean a broken shield, again causing noise problems.
- The standard in the USA for wiring an XLR (standard microphone) cable is: PIN 1 = Ground/shield, PIN 2 = Hot/+, PIN 3 = Cold/-. Phantom power rides on pins 2 and 3 and returns through pin 1, so it is important that the shield of your cables be intact and in good condition if you want to use your mic cables without any problems.
- Cables for LINE LEVEL and HI-Z (instrument level) gear are not the same!
- Line level gear, whether professional or consumer, should generally be used with balanced cables (on a 1/4" connector, it will have 3 sections and is commonly known as TRS, for Tip/Ring/Sleeve). A balanced 1/4" is essentially the same as a microphone cable, and in fact most professional gear with balanced line inputs and outputs will have XLR connectors instead of 1/4" connectors.
- Hi-Z cable for instruments (guitars, basses, keyboards, or anything with a pickup) is UNBALANCED, and should be so. The introduction of a balanced cable can cause electricity to be sent backwards into a guitar and shock the guitar player. You may want this to happen, but your gear doesn't. There is some danger here as well, especially on stage, where the voltage CAN BE LETHAL. When running a guitar/bass/keyboard "direct" into your interface, soundcard, or recording device, you should ALWAYS use a "DIRECT BOX", which uses a transformer to isolate and balance the signal, or use any input on the interface designated as an "Instrument" or "Hi-Z" input. A direct box also changes some electrical properties, resulting in a LINE LEVEL output (it brings the signal from instrument level up to line level).
4] Digital Data Transmissions: This includes S/PDIF, AES/EBU, ADAT, and MADI. I'm gonna give a brief overview of this stuff, since it's unlikely that a lot of you will ever really have to think about it:
- SPDIF = Sony/Philips Digital Interface Format. Using RCA or TOSLINK connectors, this is a digital protocol that carries 3 streams of information: digital audio left, digital audio right, and CLOCK. SPDIF generally supports 48khz/20bit information, though some modern devices can support up to 24 bits, and up to 88.2khz. SPDIF is the consumer format of AES/EBU.
- AES/EBU = Audio Engineering Society/European Broadcasters Union digital protocol. It uses a special type of cable, often terminated with XLR connectors, to transmit 2 channels of digital audio. AES/EBU is found mostly on expensive professional digital gear.
- ADAT = the Alesis Digital Audio Tape, introduced in 1991, was the first cassette-based system capable of recording 8 channels of digital audio onto a single cartridge (a SUPER-VHS tape, the same one used by high quality VCRs). Enough of the history; it's not so important, because what we are really talking about is the ADAT-LIGHTPIPE protocol, a digital transmission protocol that uses fiber-optic cable and devices to send up to 8 channels of digital audio simultaneously and in sync. ADAT-Lightpipe supports up to 48khz sample rates. This is how people expand the number of inputs by chaining interfaces.
- MADI is something you will almost never encounter. It is a protocol that allows up to 64 channels of digital audio to be transmitted over a single cable that is terminated by BNC connectors. I'm just telling you it exists so that if you ever encounter a digital snake that doesn't use Gigabit Ethernet, you will know what's going on.
Digital transmission specs:

SPDIF          -> clock    -> 2Ch  -> RCA cable (consumer)
ADAT-Lightpipe -> clock    -> 8Ch  -> Toslink (semi-pro)
SPDIF-OPTICAL  -> clock    -> 2Ch  -> Toslink (consumer)
AES/EBU        -> clock    -> 2Ch  -> XLR (pro)
TDIF           -> clock    -> 8Ch  -> DSub (semi-pro)

MADI           -> no clock -> 64Ch -> BNC {rare except in large-scale professional apps}
SDIF-II        -> no clock -> 24Ch -> DSub {rare!}
AES/EBU-13     -> no clock -> 24Ch -> DSub
5] MICROPHONES: There are many types of microphones, and several names for each type. The type of microphone doesn't equate to the polar pattern of the microphone. There are a few common polar patterns in microphones, but there are also several more that are less common. These are the main ones: Omni-Directional, Figure 8 (bi-directional), Cardioid, Super Cardioid, Hyper Cardioid, and Shotgun. Now for the types of microphones:
- Dynamic microphones utilize polarized magnets to convert acoustical energy into electrical energy. There are 2 types of dynamic microphones:
1) Moving coil microphones are the most common type of microphone made. They are also durable, and capable of handling VERY HIGH SPL (sound pressure levels).
2) Ribbon microphones are rare except in professional recording studios. Ribbon microphones are also incredibly fragile. NEVER EVER USE PHANTOM POWER WITH A RIBBON MICROPHONE; IT WILL DIE (unless it specifically requires it, but I've only ever seen this on one ribbon microphone ever). Sometimes it might even smoke or shoot out a few sparks; applying phantom power to a ribbon microphone will literally cause the ribbon, which is normally made from aluminum, to MELT. Also, windblasts and plosives can rip the ribbon, so these microphones are not suitable for things like horns, woodwinds, vocals, kick drums, or anything that "pushes air." There have been some advances in ribbon microphones and they are getting to be more common, but they are still super fragile and you have to READ THE MANUAL CAREFULLY to avoid a $1k+ mistake.
- Condenser/Capacitor microphones use an electrostatic charge to convert acoustical energy into electrical energy. The movement of the diaphragm (often metal-coated mylar) toward a ceramic "backplate" causes a fluctuation in the charge, which is then amplified inside the microphone and output as an electrical signal. Condenser microphones usually use phantom power to charge the capacitor and backplate in order to maintain the electrostatic charge. There are several types of condenser microphones:
1) Tube condenser microphones: historically, this type of microphone has been used in studios since the 1940s, and has been refined and redesigned hundreds, if not thousands, of times. Some of the "best sounding" and most desired microphones EVER MADE are tube condenser microphones from the 50's and 60's. These vintage microphones, in good condition, with the original TUBES, can sell for hundreds of thousands of dollars. Tube mics are known for sounding "full" and "warm", and for having a particular character, depending on the exact microphone. No 2 tube mics, even of the same model, will sound the same. Similar, but not the same. Tube mics have their own power supplies, which are not interchangeable between models. Each tube mic is a different design, and therefore has different power requirements.
2) FET condenser microphones: FET stands for "Field Effect Transistor", and the technology allowed condenser microphones to be miniaturized. Take, for example, the SHURE Beta98s/d, which is a mini condenser microphone. FET technology is generally more transparent than tube technology, but can sometimes sound "harsh" or "sterile".
3) Electret condenser microphones are condenser microphones that have a permanent charge, and therefore do not require phantom power; however, the charge is not truly permanent, and these mics often use AA or 9V batteries, either inside the mic or on a beltpack. These are less common.
Other important things to know about microphones:
- Pads, rolloffs, etc.: Some mics have switches or rotating collars that denote certain things. Most commonly, these are high-pass/low-cut filters or attenuation pads.
1) A HP/LC filter does exactly what you might think: it removes low frequency content from the signal at a set frequency and slope. Some microphones allow you to switch the rolloff frequency. Common rolloff frequencies are 75hz, 80hz, 100hz, 120hz, 125hz, and 250hz.
2) A pad in this example is a switch that lowers the output of the microphone directly after the capsule to prevent overloading the input of a microphone preamplifier. You might be asking: how is that possible? Some microphones put out a VERY HIGH SIGNAL LEVEL, sometimes around line level (-10/+4dbu); mic level is generally accepted to start at -75dbu and continues increasing until it becomes line level in voltage. It should be noted that line level signals are normally of a different impedance than mic level signals, which is determined by the gear. An example for this would be: I mic the top of a snare drum with a large diaphragm condenser mic (solid state mic, not tube) that is capable of handling very high SPLs (sound pressure levels). When the snare drum is played, the input of the mic preamp clips (distorts), even with the gain turned all the way down. To combat this, I would use a pad with enough attenuation to lower the signal into the proper range of input (-60db to -40db). In general, it is accepted to use a pad with only as much attenuation as you need, plus a small margin of error for extra "headroom". What this means is that if you use a 20db pad where you only need a 10db pad, you will then have to add an additional 10db of gain to achieve a desirable signal level. This can cause problems, as not all pads sound good, or even transparent, and they can color and affect your signal in sometimes unwanted ways that are best left unamplified.
- Other mic tips/info:
1) When recording vocals, you should always use a pop filter. A pop filter mounted on a gooseneck is generally more effective than a windscreen made of foam that slips over the microphone. The foam type often kills the high-frequency response, alters the polar pattern, and can introduce non-linear polarity problems (part of the frequency spectrum will be out of phase). If you don't have a pop filter or don't want to spend on one, buy or obtain a hoop of some kind, buy some cheap panty-hose, and stretch it over the hoop to build your own pop filter.
2) Terms related to mics:
- Plosives: "B", "D", "F", "G", "J", "P", "T" - hard consonants and other vocal sounds that cause windblasts. These are responsible for a low frequency pop that can severely distort the diaphragm of the microphone, or cause a strange inconsistency of tonality by causing a short-term proximity effect.
- Proximity effect: an exponential increase in low frequency response caused by having a microphone excessively close to a sound source. This can be caused either by the force of the moving air actually making the microphone's diaphragm move and sometimes distort, usually on vocalists, or by the buildup of low frequency sound waves due to off-axis cancellation ports. You cannot get proximity effect on an omnidirectional microphone. With some practice, you can use proximity effect to your advantage, or as an effect. For example, if you are recording someone whispering and it sounds thin, weak, or irritating due to the intense high-mid and high frequency content, get the person very close to a cardioid microphone with two pop filters back to back approx. 1/2"-1" away from the mic, set your gain carefully, and you can achieve a very intimate recording of whispering. In a different scenario, you can place a mic inside of a kick drum between 1"-3" away from the inner shell, angled up at the point of impact and towards the floor tom. This usually captures a huge low end and the sympathetic vibration of the floor tom on the kick drum hits, but retains a clarity of attack without being distorted by the SPL of the drum and without capturing the unpleasant low-mid resonation of the kick drum head and shell that is common directly in the middle of the shell.
6) Wave Envelope: The envelope is the graphical representation of a sound wave commonly found in a DAW. There are 4 parts to this: Attack, Decay, Sustain, Release (a small sketch follows below). 1) Attack is how quickly the sound reaches its peak amplitude; 2) Decay is the time it takes to fall from that peak to the sustain level; 3) Sustain is how long a sound remains at a certain level (think of striking a tom: the initial smack is the attack, then it decays to the resonance of the tom, and how long it resonates is the sustain); 4) Release is the time it takes for the sound to fade out once the sustain ends. This is particularly important, as these are also the settings on a common piece of gear called a compressor! Understanding the envelope of a sound is key to learning how to manipulate it.
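As a rough illustration (my own sketch; the parameter names and the simple linear shape are assumptions, not a standard), here is a piecewise-linear ADSR envelope:

(defun adsr-amplitude (time attack decay sustain-level note-end release)
  ;; Envelope amplitude from 0.0 to 1.0 at TIME seconds; NOTE-END is
  ;; when the sustain stops and the release begins.
  (cond ((< time attack)               ; attack: rise to the peak
         (/ time attack))
        ((< time (+ attack decay))     ; decay: fall to the sustain level
         (- 1.0 (* (- 1.0 sustain-level)
                   (/ (- time attack) decay))))
        ((< time note-end)             ; sustain: hold
         sustain-level)
        ((< time (+ note-end release)) ; release: fade to silence
         (* sustain-level
            (- 1.0 (/ (- time note-end) release))))
        (t 0.0)))

(adsr-amplitude 0.005 0.01 0.1 0.6 1.0 0.3) ; => 0.5, halfway up the attack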
7) Phase Cancellation: This is one of the most important concepts in home recording, especially when looking at drums. I'm putting it in this section because it matters so much. Phase cancellation is what occurs when the same frequencies occur at different times. To put it simply, frequency amplitudes are additive - meaning if you have 2 sound waves of the same frequency, one amplitude at +4 and the other at +2, the way we perceive sound is that the frequency is at +6. But a sound wave has a positive and negative amplitude as it travels (like a wave in the ocean with a peak and a swell). If the frequency then has two sources and it is 180 degrees out of phase, that means one wave is at +4 while the other is at -4. This sums to 0, cancelling out the wave. Effectively, you would hear silence (the little numeric demo below shows this). This is why micing techniques are so important, but we'll get into that later. I wanted this term at the top, and will likely mention it again.
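To see the cancellation numerically, here is a tiny sketch (mine, assuming a 1kHz tone at a 44.1k sample rate) that sums a sine wave with a copy of itself 180 degrees out of phase:

(defun sine-sample (freq time &optional (phase 0.0))
  ;; One sample of a unit-amplitude sine wave at TIME seconds.
  (sin (+ (* 2 pi freq time) phase)))

(loop for n from 0 below 5
      for time = (/ n 44100.0d0)
      for a = (sine-sample 1000 time)    ; original wave
      for b = (sine-sample 1000 time pi) ; same wave, 180 degrees out
      do (format t "~,6F + ~,6F = ~,6F~%" a b (+ a b)))
;; Every printed sum is 0.000000: the two waves cancel to silence.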

Next we can look at the different types of options to actually record your sound!

1) Handheld/All in one/Field Recorders: I don't know if portable cassette tape recorders are still around, but that's an example of one. These are (or used to be) very popular with journalists because they were pretty decent at capturing speech. They do not fare too well with music though. Not too long ago, we saw the emergence of the digital field recorder. These are really nifty little devices. They come in many shapes, sizes and colors, and can be very affordable. They run on batteries, have built-in microphones, and record digitally onto SD cards or hard discs. The more simple ones have a pair of built-in condenser microphones, which may or may not be adjustable, and record onto an SD card. They start around $99 (or less if you don't mind buying refurbished). You turn it on, record, connect the device itself or the SD card to your computer, transfer the file(s), and there is your recording! An entry-level example is the Tascam DR-05. It costs $99. It has two built-in omni-directional mics, comes with a 2GB microSD card and runs on two AA batteries. It can record in different formats, the highest being 24-bit 96KHz Broadcast WAV, which is higher than DVD quality! You can also choose to record as an MP3 (32-320kbps) if you need to save space on the SD card or if you're simply going to record a speech/conference or upload it on the web later on. It's got a headphone jack and even small built-in speakers. It can be mounted onto a tripod. And it's about the size of a cell phone. The next step up (although there are of course many options that are price- and feature-wise in between this one and the last) is a beefier device like the Zoom H4n. It's got all the same features as the Tascam DR-05 and more! It has two adjustable built-in cardioid condenser mics in an XY configuration (you can adjust the angle from a 90-120 degree spread). On the bottom of the device, there are two XLR inputs with preamps. With those, you can expand your recording possibilities with two external microphones. The preamps can send phantom power, so you can even use very nice studio mics. All 4 channels will be recorded independently, so you can pop them onto your computer later and mix them with software. This device can also act as a USB interface, so instead of just using it as a field recorder, you can connect it directly to your computer or to a DSLR camera for HD filming. My new recommendation for this category is actually the Yamaha EAD10. It really is the best all-in-one solution for anyone that wants to record their kit audio with a great sound. It sports a kick drum trigger (mounts to the rim of the kick) with an x-y pattern set of microphones to pick up the rest of the kit sound. It also has on-board effects, lots of software integration options and smart features through its app. It really is a great solution for anyone who wants to record without reading this guide.
The TL;DR of this guide is - if it seems like too much, buy the Yamaha EAD10 as a simple but effective recording solution for your kit.

2) USB Microphones: There are actually mics that you can plug directly into your computer via USB. The mics themselves are their own audio interfaces. These mics come in many shapes and sizes, and offer affordable solutions for basic home recording. You can record using a DAW or even something simple like the stock Windows sound recorder program that's in the Accessories folder of the Windows operating system. The Blue Snowflake is very affordable at $59. It can stand alone or you can attach it to your laptop or your flat screen monitor. It can record up to 44.1kHz, 16-bit WAV audio, which is CD quality. It's a condenser mic with a directional cardioid pickup pattern and has a full frequency response - from 35Hz-20kHz. It probably won't blow you away, but it's a big departure from your average built-in laptop, webcam, headset or desktop microphone. The Audio Technica AT2020 USB is a USB version of their popular AT2020 condenser microphone. At $100 it costs a little more than the regular version. The AT2020 is one of the finest mics in its price range. It's got a very clear sound and it can handle loud volumes. Other companies like Shure and Samson also offer USB versions of some of their studio mics. The AT2020 USB also records up to CD-quality audio and comes with a little desktop tripod. The MXL USB.009 mic is an all-out USB microphone. It features a 1 inch large-diaphragm condenser capsule and can record up to 24-bit 96kHz WAV audio. You can plug your headphones right into the mic (remember, it is its own audio interface) so you can monitor your recordings with no latency, as opposed to doing so with your computer. Switches on the mic control the gain and can blend the mic channel with playback audio. Cost: $399. If you already have a mic, or you don't want to be stuck with just a USB mic, you can purchase a USB converter for your existing microphone.
3) Audio Recording Interfaces: You've done some reading up on this stuff... now you are lost. Welcome to the wide, wide world of Audio Interfaces. These come in all different shapes and sizes, features, sampling rates, bit depths, inputs, outputs, you name it. Welcome to the ocean, let's try to help you find land.
- An audio interface, as far as your computer is concerned, is an external sound card. It has audio inputs, such as a microphone preamp and outputs which connect to other audio devices or to headphones or speakers. The modern day recording "rig" is based around a computer, and to get the sound onto your computer, an interface is necessary. All computers have a sound card of some sort, but these have very low quality A/D Converters (analog to digital) and were not designed with any kind of sophisticated audio recording in mind, so for us they are useless and a dedicated audio interface must come into play.
- There are hundreds of interfaces out there. Most commonly they connect to a computer via USB or Firewire. There are also PCI and PCI Express-based interfaces for desktop computers. The most simple interfaces can record one channel via USB, while others can record up to 30 via Firewire! All of the connection types into the computer have their advantages and drawbacks. The chances are, you are looking at USB, Firewire, or Thunderbolt. As far as speed is concerned, most interfaces are in the same realm, though Thunderbolt has a faster data transfer rate. There are some differences in terms of CPU load, and conflict handling (when packets collide) is handled differently: USB sends conflict resolution to the CPU, Firewire handles it internally, and Thunderbolt, from what I could find, sends it to the CPU as well. For most applications, none of them are going to be superior from a home-recording standpoint. When you get up to 16/24 channels in/out simultaneously, it's going to matter a lot more.
- There are a number of things to consider when choosing an audio interface: first off your budget, the number of channels you'd like to record simultaneously, your monitoring system, your computer and operating system, and your applications. Regarding budget, you have to get real. $500 is not going to get you a rig with the ability to multi-track a drum set covered in mics. Not even close! You might get an interface with 8 channels for that much, but you have to factor in the cost of everything, including mics, cables, stands, monitors/headphones, software, etc... Considerations: Stereo Recording or Multi-Track Recording? Stereo Recording is recording two tracks: a left and right channel, which reflects most audio playback systems. This doesn't necessarily mean you are recording with only two mics; it means that what your rig records onto your computer is a single stereo track. You could be recording a 5-piece band with 16 mics/channels, but if you're recording in stereo, all you're getting is a summation of those 16 channels (see the sketch below). This means that in your recording software, you won't be able to manipulate any of those channels independently after you've recorded them. If the rack tom mic wasn't turned up loud enough, or you want to mute the guitars, you can't do that, because all you have is a stereo track of everything. It's up to you to get your levels, balance and tone right before you hit record. If you are only using two mics or lines, then you will have individual control over each mic/line after recording. Commonly, you can find 2-input interfaces and use a sub-mixer, taking the left/right outputs and plugging those into each channel of the interface. Some mixers will output a stereo pair to a computer as an interface, such as the Allen & Heath ZED16. If you want full control over every single input, you need to multi-track. Each mic or line that you are recording with gets its own track in your DAW software, which you can edit and process after the fact. This gives you a lot of control over a recording and opens up many mixing options, and also many more issues. Interfaces that facilitate multitracking include the PreSonus FireStudio, Focusrite Scarlett interfaces, etc. There are some mixers that are also interfaces, such as the PreSonus StudioLive 16, but these are very expensive. There are core-card interfaces as well; these plug directly into your motherboard via PCI or PCI-Express slots. Pro Tools HD is a core-card interface and requires more hardware than just the card to work. I would recommend steering clear of these until you have a firm grasp of signal chain and digital audio, as there are more affordable solutions that will yield similar results in a home environment.
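To make the stereo-summation point concrete, here is a toy Scala sketch (the names and the simple linear pan law are mine, not anything a real DAW exposes) of how many mono channels collapse into two. Once they are summed, the per-channel data is gone for good:

```scala
// Toy model of a stereo mixdown: many mono channels become one L/R pair.
object Mixdown {
  // channels: one Array[Double] of samples per mic/line
  // gain, pan: per-channel settings fixed *before* recording (pan 0 = hard left, 1 = hard right)
  def toStereo(channels: Seq[Array[Double]],
               gain: Seq[Double],
               pan: Seq[Double]): (Array[Double], Array[Double]) = {
    val n = channels.head.length
    val left  = new Array[Double](n)
    val right = new Array[Double](n)
    for (c <- channels.indices; i <- 0 until n) {
      left(i)  += channels(c)(i) * gain(c) * (1.0 - pan(c))
      right(i) += channels(c)(i) * gain(c) * pan(c)
    }
    (left, right) // only these two tracks reach the DAW; per-mic edits are no longer possible
  }
}
```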

DAW - Digital Audio Workstation

I've talked a lot about theory, hardware, signal chain, etc... but we need a way to interpret this data. First off, what does a DAW do? Some refer to them as DAEs (Digital Audio Editors). You could call it a virtual mixing board; however, that isn't entirely correct. DAWs allow you to record, control, mix and manipulate independent audio signals. You can change their volume, add effects, splice and dice tracks, combine recorded audio with MIDI-generated audio, record MIDI tracks and much, much more. In the old days, when studios were based around large consoles, the actual audio needed to be recorded onto some kind of medium - analog tape. The audio signals passed through the boards and were printed onto the tape, the tape decks were used to play back the audio, and any cutting, overdubbing etc. had to be done physically on the tape. With a DAW, your audio is converted into 1s and 0s through the converters on your interface when you record, and so computers and their hard disks have largely taken the place of reel-to-reel machines and analog tape.
Here is a list of commonly used DAWs in alphabetical order:
- ACID Pro
- Apple Logic
- Cakewalk SONAR
- Digital Performer
- FL (Fruity Loops) Studio (only versions 8 and higher can actually record audio, I believe)
- GarageBand
- PreSonus Studio One
- Pro Tools
- REAPER
- Propellerhead Reason (version 6 combined Reason and Record into one piece of software, so it is now a full audio DAW; earlier versions of Reason are MIDI-based and don't record audio)
- Propellerhead Record (see above)
- Steinberg Cubase
- Steinberg Nuendo
There are of course many more, but these are the main contenders. [Note that not all DAWs actually have audio recording capabilities (all the ones I listed do, because this thread is about audio recording), because many of them are designed for applications like MIDI composing, looping, etc.] Some are relatively new, others have been around for a while and have undergone many updates and transformations. Most have different versions that cater to different types of recording communities, such as home recording/consumer or professional.
That's a whole lot of choices. You have to do a lot of research to understand what each one offers, what limitations they may have etc... Logic, Garageband and Digital Performer for instance are Mac-only. ACID Pro, FL Studio and SONAR will only run on Windows machines. Garageband is free and is even pre-installed on every Mac computer. Most other DAWs cost something.
Reaper is a standout. A non-commercial license only costs $60. Other DAWs often come bundled with interfaces, such as ProTools MP with M-Audio interfaces, Steinberg Cubase LE with Lexicon Interfaces, Studio One with Presonus Interfaces etc. Reaper is a full function, professional, affordable DAW with a tremendous community behind it. It's my recommendation for everyone, and comes with a free trial. It is universally compatible and not hardware-bound.
You of course don't have to purchase a bundle. Your research might reveal that a particular interface suits your needs well, but the software that the same company offers or bundles isn't that hot. As a consumer you have a plethora of software and hardware manufacturers competing for your business, and there is no shortage of choice. One thing to think about, though, is compatibility and customer support. With some exceptions, technically you can run most DAWs with most interfaces. But again, don't just assume this - do your research! Also, some DAWs will run smoother on certain interfaces and might experience problems on others. It's not a bad assumption that if you purchase the software and hardware from the same company, they're at least somewhat optimized for each other. In fact, Pro Tools until recently would only run on Digidesign (now Avid) and M-Audio interfaces. While many folks didn't like having their hardware choices limited in order to run Pro Tools, a lot of users didn't mind, because I think it at least in part made Pro Tools run smoother for everyone, and if you did have a problem, you only had to call one company. There are many documented cases where consumers with software and hardware from different companies get the runaround:
Software Company X: "It's a hardware issue, call Hardware Company Z". Hardware Company Z: "It's a software issue, call Software Company X".
Another thing to research is the different versions of each piece of software. Many of them come in several versions at different price points, from entry-level or student versions all the way up to versions catering to the pros. Cheaper versions come with limitations, whether it be a maximum number of audio tracks you can run simultaneously, the plug-ins available or supported plug-in formats, or a lack of other features that the upper versions have. Some pro versions might require you to run certain kinds of hardware. I have neither the time nor the will to research individual DAWs, so if any of you want to make a comparison of different versions of a specific DAW, be my guest! In the end, like I keep stressing - we each have to do our own research.
One big thing to note about the DAW is this: your signal chain is your DAW. It is the digital representation of that chain, and it is important to understand it in order to use the DAW properly. It is how you route the signal from one spot to another, how you move it through a sidechain compressor or bus the drums into the main fader. It is a digital representation of a large-format recording console, and if you don't understand how the signal gets from the sound source to your monitor (speaker), you're going to have a bad time.

Playback - Monitors are not just for looking at!

I've mentioned monitors several times and wanted to touch on them quickly: monitors are whatever you are using to listen to the sound. These can be headphones, powered speakers, unpowered speakers, etc. The key thing here is that they are accurate. You want a good depth of field, you want as wide a frequency response as you can get, and you want NEARFIELD monitors. Unless you are working with a space that can put the monitor 8' away from you, 6" is really the biggest speaker size you need. At that distance, nearfield monitors will reproduce the audio frequency range faithfully for you. There are many options here: closed-back headphones, open-back headphones, and powered and unpowered studio monitors (unpowered monitors require a separate power amp to drive them). For headphones, I recommend the AKG K271 or K872, the Sennheiser HD280 Pro, etc. There are many options, but if mixing on headphones I recommend spending some good money on a set. For powered monitors, there's really only one choice I recommend: Kali Audio LP-6 monitors. They are, dollar for dollar, the best monitors you can buy for a home studio, period. These things contend with Genelecs and cost a quarter of the price. Yes, they still cost a bit, but if you're going to invest, invest wisely. I don't recommend unpowered monitors, because if you skimp on the power amp you lose all the advantages monitors give you. Just get powered monitors if you're not going with headphones.

Drum Mic'ing Guide - I'm not going to reinvent the wheel.


That's all for now; this has taken some time to put together (a couple of hours now). I can answer other questions as they pop up. I used a few sources for this information, most notably some well-put-together sections in the recording section of the Pearl Drummers Forum. I know a couple of the users are no longer active there, but if you see this and think "Hey, he ripped me off!", you're right, and thanks for allowing me to rip you off!

A couple other tips that I've come across for home recording:
You need to manage your gain/levels when recording. Digital is NOT analog! What does this mean? You should be PEAKING (the loudest the signal gets) around -12dB to -15dB on your meters. Any hotter than that and you're eating into your headroom and risk clipping your converters and overdriving your digital signal processors.
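If you want to check this against actual numbers: digital meters read in dBFS, where 0 dBFS is full scale and everything useful is negative. A minimal sketch (assuming samples normalized to -1.0..1.0; the object and function names are mine):

```scala
// Compute the peak level of a buffer of samples in dBFS.
object PeakMeter {
  def peakDbfs(samples: Array[Double]): Double =
    20.0 * math.log10(samples.map(math.abs).max) // 1.0 (full scale) -> 0 dBFS

  def main(args: Array[String]): Unit = {
    val buffer = Array(0.25, -0.10, 0.20)          // peaks at a quarter of full scale
    println(f"peak: ${peakDbfs(buffer)}%.1f dBFS") // prints: peak: -12.0 dBFS
  }
}
```

A signal peaking at a quarter of full scale sits right around -12 dBFS, which is the ballpark you want to be recording at.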
What sound level should my master bus be at for Youtube?
Bass Traps 101
Sound Proofing 101
submitted by M3lllvar to drums

Dzelina - Language of the Rivers

Dzelina is my second serious attempt at a naturalistic conlang, artificially evolved from the first. Please offer criticisms, advice etc. Thank you for reading!

Introduction and History

Dzelina is the language spoken by the Dzeli, some 14 million people inhabiting most of the northern part of the continent of Siya. Three major cities of the Dzeli people (Saresi, Rekõne and Kadzara) have been continuously inhabited for over 2700 years, and the Dzeli capital, Retofa, for 1500 years. These four cities lie on rivers in a Mediterranean climate and have been the historic cradle of human civilisation. There is a large mountain range running east to west in the north, and a stretch of land above it which is also inhabited by humans. This separation has created the two distinct ethnic groups of the Dzeli, the Northern and the Southern, as well as a small group of Torõnans who inhabit a tropical island north of the coast.
These four cities have been ruled variously as independent city-states and as parts of larger empires, and have endured foreign occupation by bears in the south and wolves in the west. In the year 3370 a theocratic government based in the Monastery of Renya (one of the nine deities of the Dzeli religion) liberated the country from bear control. It has ruled all human-inhabited lands since then, and fostered a golden age of art, wealth and science. The Dzeli nation is the most powerful and richest in the known world, and arguably on the eve of an agricultural revolution due to developments in industrial techniques.
In previous centuries, religion and culture were disparate and often unique to each region. Under the theocratic government of Renya, a unified and sectarian religion has been enforced throughout the human populations. The nation has become politically centralised in the city of Retofa, which is ruled by a Voice who selects their successor from the students of one of the ten monasteries scattered across the land. This practice originates from succession customs in the Monastery of Renya.
The Dzelina language is an isolate, although its two dialects, Northern and Southern Dzelina, are slowly diverging over time. The writing system of Dzelina was created c. 3200, while the Dzelina detailed here is spoken c. 4200. The writing system has not been updated to account for changes in the language, so there are significant discrepancies between the written abugida and the sounds it signifies. However, Dzelina has evolved slowly due to its isolation.

Linguistic Changes from c. 3200 (Kaze/Proto-Dzelina) to c. 4200 (Modern Dzelina)
Phonology:
- The /ɒ/ vowel raised into either /o/ or /ʌ/. Some new consonants were introduced mostly due to contact with the wolf language and slight sound changes.
- The three ejectives come from extensive contact with the wolves.
- /b/ and /g/ were entirely devoiced, /d/ partially devoiced.
- Rhythm type is now trochaic rather than undetermined.
- There are various sound changes throughout the grammar and vocabulary.
- Nasalisation has begun on /a/ and /o/.
- Diphthongisation has begun, creating /ei/ and /ai/.
Grammar:
- Dzelina is now predominantly fusional. This change occurred over approximately one thousand years, with most of the agglutinative elements being lost.
- The two-case system was lost because of the aphaeresis of /p/ and /f/; however, some remnants still exist among pronouns. The case system was falling out of usage anyway, as it was used mainly in formal or scientific speech.
- The new morphosyntactic alignment is a result of the loss of case, but the personal pronouns still adopt nominative-accusative alignment.
- The perfective aspect was integrated into tense after aphaeresis of /f/.
- Agglutinative Realis/Irrealis particles were lost and now their meaning is fused into the modalities. There is no corresponding sound change so it is likely this occurred simply as a result of the general fusional direction of Dzelina.
- The negative now has a particle which must be used after the verb root. This likely arose from the loss of the agglutinative realis/irrealis markers, where the plural irrealis marker was retained on the negative mood (despite the fact that the negative mood was realis).
- There are new plural masculine and feminine personal pronouns which have arisen arbitrarily.
- The accusative demonstrative and interrogative pronouns were lost as a result of the collapse of the case system; nowe lost -ow-, with no proposed reason.
- There is no case agreement on verbs, for the reasons above.
- There are now two possessive classes, alienable and inalienable possession, as a result of several sound changes. The change appears arbitrary.
- The Passive construction now also relates singular and plural meanings. This is likely because the raising of /ɒ/ to /o/ made the Passive particles resemble the singular and plural verb conjugations, so speakers naturally formed singular and plural forms of the particles.

Phonology

Consonants
| | Bilabial | Labiodental | Alveolar | Palatal | Velar |
|---|---|---|---|---|---|
| Plosive | p | | t d̥* | | k |
| Nasal | m | | n | ɲ | |
| Tap or flap | | | ɾ | | |
| Fricative | | f v | s z | | |
| Lateral approximant | | | l | ʎ | |

\* This is phonated in slack voice (the glottal opening is slightly wider than in modal voice, i.e. slightly more devoiced than regular /d/).

Co-articulated

| | |
|---|---|
| Approximant | ʍ |
| Stop | t͡s d͡z |
| Ejective | t͡s' t' k' |

Vowels
| | Front | Back |
|---|---|---|
| Close | i | |
| Close-mid | e ø | o |
| Open-mid | | ʌ |
| Open | a | |
Diphthongs: ei ai
Nasalised Vowels: ã õ

Transliteration
Characters in the transliteration correspond to those used in the IPA, excluding:

| IPA symbol | Transliteration |
|---|---|
| ʌ | u |
| d̥ | d |
| ʍ | w |
| ɾ | r |
| ɲ | ń |
| t͡s | c |
| d͡z | j |
| ʎ | y |
| k' | k̍ |
| t' | t̍ |
| t͡s' | c̍ |

Phonotactics
Syllable Structure: (C)V
Fixed Syllable Stress: In general, penultimate. Irregular stress is marked in transliteration with an acute accent on the vowel.
Rhythm Type: Trochaic (every even syllable stressed from the right).
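Since the syllable template is strictly (C)V, syllabification is mechanical. Here is a small sketch (my own helper, operating naively on transliterated strings; combining marks such as the one in t̍ would need Unicode normalization first, and the diphthongs ei/ai surface as two vowel slots) that splits a word into syllables and picks the default penultimate stress:

```scala
object DzelinaSyllables {
  // Vowel letters of the transliteration, nasalised ã/õ included.
  private val vowels = Set('i', 'e', 'ø', 'o', 'u', 'a', 'ã', 'õ')

  // Greedily split a word into (C)V syllables; None if it doesn't fit the template.
  def syllabify(word: String): Option[List[String]] =
    if (word.isEmpty) Some(Nil)
    else if (vowels(word.head)) syllabify(word.tail).map(word.head.toString :: _)
    else if (word.length >= 2 && vowels(word(1)))
      syllabify(word.substring(2)).map(word.substring(0, 2) :: _)
    else None

  // Default stress is penultimate (irregular stress is marked with an acute accent).
  def stressed(word: String): Option[String] =
    syllabify(word).flatMap(ss => ss.lift(ss.length - 2))

  def main(args: Array[String]): Unit = {
    println(syllabify("talereto")) // Some(List(ta, le, re, to))
    println(stressed("talereto"))  // Some(re)
  }
}
```

For talereto this yields ta-le-re-to with stress on re, matching the [ˌtaleˈɾeto] transcription in the example phrases below.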

Grammar

Language Type: Synthetic, Fusional; however, there are remnants of agglutination.
Word Order: VSO
Noun-Phrase Order: Modifier-final, ADV.V Subject.ADJ Object.ADJ

Nouns
There is very little inflection of nouns in Dzelina, which makes it resemble analytic languages. However, much of the information usually encoded on nouns in fusional languages is coded on verbs or other particles.
Plurality: Coded on the verb, in terms of singular and plural. The postposition ke can be used after a noun, followed by a number, to specify the number of a noun; ke in this case acts as an adjective.
Count and Mass Nouns: Can be inferred from whether the noun represents an individual or a collective.

Verbs
Morphosyntactic Alignment: Neutral, except for sentences with pronouns, in which case it is (AS)(P), nominative-accusative. This is a relic of the nominative-accusative case system.
Syntax:
| Verb | Subject | Object |
|---|---|---|
| VERB | NOUN | |
| VERB | NOUN | NOUN |
Suppletion: Present for tense but not aspect. As defined by WALS: 'Suppletion is defined as the phenomenon whereby regular semantic relations are encoded by unpredictable formal patterns. Cases where the paradigmatically related forms share some phonological material are examples of weak suppletion, as in English buy vs. bought, while cases with no shared phonological material are instances of strong suppletion, as in English go vs. went.'
Tense: Past, present and future are distinguished, along with two degrees of past remoteness: Hesternal (yesterday's past and earlier) and Hodiernal (today's past). Irregular verbs condense HOD and HEST into a simple past (some exceptions to this exist and are notated). There are four relative tenses: Pluperfect, Future-in-the-Past, Future Perfect and Future-in-the-Future.
Grammatical Aspect: A simple binary between Perfective and Imperfective is used. A Present Perfective aspect does not exist, so the absolute present tense has no aspect. The imperfective particle negates the perfective and converts the verb to imperfective aspect.
Absolute (Perfective)

| | Singular | Plural |
|---|---|---|
| Hesternal | VERB-fa | VERB-ko |
| Hodiernal | VERB-wa | VERB-co |
| Present* | VERB-ke | VERB-do |
| Future | VERB-re | VERB-dore |

Relative (Perfective)

| | Singular | Plural |
|---|---|---|
| Pluperfect | VERB-se | VERB-cero |
| Future-in-the-Past | VERB-seke | VERB-sado |
| Future Perfect | VERB-kewa | VERB-cere |
| Future-in-the-Future | VERB-kese | VERB-dose |

Imperfective

| | Singular | Plural |
|---|---|---|
| Past | e-VERB | te-VERB |
| Present | eifo-VERB | eifa-VERB |
| Future | teno-VERB | tefa-VERB |
*Note that there is no Present Perfective so the conjugations listed have no aspect.
**Note that there are 24 irregular verb conjugations which vary in structure. They are detailed in the lexicon. There is suppletion among these conjugations.
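Because tense and number are fused into a single affix, regular conjugation is a lookup over the two tables above. A sketch (the object, type and function names are mine; irregular verbs, the relative tenses and the moods are omitted, and I am assuming k̍ate- is the root behind k̍ateke in the example phrases):

```scala
object DzelinaVerbs {
  sealed trait Num
  case object Sg extends Num; case object Pl extends Num

  sealed trait Tense
  case object Hesternal extends Tense; case object Hodiernal extends Tense
  case object Present extends Tense; case object Future extends Tense

  // Absolute (perfective): tense and number fuse into one suffix.
  // (The present forms carry no aspect, per the note above.)
  def absolute(root: String, t: Tense, n: Num): String =
    root + ((t, n) match {
      case (Hesternal, Sg) => "fa"; case (Hesternal, Pl) => "ko"
      case (Hodiernal, Sg) => "wa"; case (Hodiernal, Pl) => "co"
      case (Present, Sg)   => "ke"; case (Present, Pl)   => "do"
      case (Future, Sg)    => "re"; case (Future, Pl)    => "dore"
    })

  // Imperfective: a prefix, with the two past remoteness degrees collapsed.
  def imperfective(root: String, t: Tense, n: Num): String =
    ((t, n) match {
      case (Hesternal | Hodiernal, Sg) => "e"
      case (Hesternal | Hodiernal, Pl) => "te"
      case (Present, Sg) => "eifo"; case (Present, Pl) => "eifa"
      case (Future, Sg)  => "teno"; case (Future, Pl)  => "tefa"
    }) + root

  def main(args: Array[String]): Unit =
    println(absolute("k̍ate", Present, Sg)) // k̍ateke, as in 'k̍ateke aide so'
}
```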
Grammatical Mood: Dzelina distinguishes between Realis and Irrealis moods. The modal markings also relate the plurality of the subject. There are no set affixes or particles to signify realis or irrealis; they are fused into the following modalities.
Modality: Propositional, Epistemic (PE); Event, Deontic (EO); Event, Dynamic (EY); Conditional (CO); Others (OT). Almost all Irrealis moods appear in a binary with another mood. Dzelina distinguishes between situational and epistemic modal markings. The Speculative and Conditional moods are often used in formal language (see example phrases).
- PE: Speculative – IRR, -leze (SG), -lezi (PL)
- PE: Deductive/Assumptive – IRR, -leta (SG), -yeda (PL)
- EO: Permissive – IRR, -yeva (SG), -yeva (PL)
- EO: Obligative – IRR, -yeya (SG), -yewø (PL)
- EY: Abilitive – IRR, -mewe (SG), -met̍o (PL) – nasalises a preceding /a/ or /o/
- EY: Volitive – IRR, -dore (SG), -do (PL)
- CO: Implicative – IRR, -ńora (SG), -yo (PL)
- CO: Predictive – IRR, -ńota (SG), -ńodo (PL)
- OT: Purposive – IRR, -yapa (SG), -yavo (PL)
- OT: Resultative – IRR, -k̍ore (SG), -k̍oda (PL)
- OT: Negative – REAL, ro- (SG), ra- (PL) – appears before the verb root; ~ra appears after the root
- OT: Imperative – REAL/IRR, -ta (1.SG), -t̍asa (2/3.SG), -k̍ãmi (PL)
- OT: Desiderative – IRR, -reto (1.SG), -det̍e (2/3.SG), -yora (PL)

Other Parts of Speech
Personal Pronouns: Use nominative-accusative morphosyntactic alignment; once again, this is a relic of the nominative-accusative case system. A group of people of both sexes is referred to as feminine (likely due to Dzeli matrilineality).
Singular

| | Nominative | Accusative |
|---|---|---|
| 1P | aide | ado |
| 2P | se | so |
| 3P, feminine | li | re |
| 3P, masculine | wo | wota |
| 3P, inanimate | ai | ve |

Plural

| | Nominative | Accusative |
|---|---|---|
| 1P, inclusive | yola | yodo |
| 1P, exclusive | yońe | yońo |
| 2P | seya | seyado |
| 3P, feminine | | re |
| 3P, masculine | wo | ota |
| 3P, inanimate | la | lapo |
Demonstratives: Appear immediately before the noun. There are three categories of demonstrative. The demonstrative word is also used as the definite article (there is no distinct word for a definite article). Demonstratives can be used to form a relative clause, in which case they appear after the noun which the clause is attached to and before the noun being attached.
- α: Very close (arm’s length away, speaker’s body), also your location
- β: Near, viewable (can be used instead of γ in storytelling and poetic language, for persons only)
- γ: Far/can't be seen (also abstract and astronomical objects; however, it can be used instead of β in storytelling and poetic language)
| Demonstrative/Definite article | Nominative | Accusative |
|---|---|---|
| SG.α | te | si |
| SG.β | no | no |
| SG.γ | t̍e | t̍e |
| PL.α | teso | sit̍o |
| PL.β | nowe | novo |
| PL.γ | reda | redo |
Interrogative Pronouns: Appear immediately before the noun. Distinguished by semantic category. These pronouns are also used in the formation of content questions (see below).
| | Interrogative |
|---|---|
| Adjective | liti |
| Person | eiji |
| Thing | airi |
| Place | k̍eri |
| Time | airati |
| Means | lipi |
| Reason | airado |
Indefinite Pronouns: There are no indefinite pronouns in Dzelina; however, a substitute is formed using the Existential construction: a verb (such as t̍e 'to exist') is used plus a noun to form an indefinite-pronoun version of that noun (e.g. person – somebody). See notes on the existential construction below.
Possessive Pronouns: Correspond exactly to the personal pronouns. However, there is no plain first-person plural form, and the exclusive and inclusive first person have no singular forms (marked X below).
| Possessive | Singular | Plural |
|---|---|---|
| 1P | aiya | X |
| 2P | eito | etodo |
| 3P, feminine | eiteri | etero |
| 3P, masculine | eitewo | ewova |
| 3P, inanimate | zaca | zaco |
| 1P, exclusive | X | lodø |
| 1P, inclusive | X | lõnø |
Quantifier Pronouns: Invariable, and formally distinct from conjunctions. Unlike the indefinite pronouns, these do not require the existential construction.
Quantifiers (invariable)

| | |
|---|---|
| anywhere | k̍eriya |
| anybody | eijiya |
| any | ya |
| every | løpaka |
| everyone | eipaka |
| everything | pafa |
| nobody | oweya |
| either | lite |
| many | ówete |
Adjectives: Comparatives appear after the adjective/adverb. The superlative removes the need for a definite article. 'V' (verb) comparatives are used to modify adverbs (plural when the subject is plural). Possessive adjectives are suffixed to the noun.
There are two possessive classes: inalienable and alienable. Alienable possession expresses a contingent association between the noun and possessor, while inalienable possession expresses a necessary association. However, there are exceptions; most notably, blood relations are alienable while marriage relations are inalienable, transport vehicles and animals are inalienable, and all illnesses are alienable.
Comparative

| | Singular | Plural |
|---|---|---|
| Subject, positive comparative | aimit̍e | ãmido |
| Subject, positive superlative | aimit̍ø | ãmidø |
| Object, positive comparative | okate | ókato |
| Object, positive superlative | ókatø | okatø |
| Subject, negative comparative | sepa | sepina |
| Subject, negative superlative | sepave | sepove |
| Object, negative comparative | órena | óreno |
| Object, negative superlative | oreva | orevo |
| Verb, positive comparative | tizate | dizoto |
| Verb, positive superlative | tizatø | dizotø |
| Verb, negative comparative | tizãna | diyãno |
| Verb, negative superlative | tizave | dizove |

Possessive

Alienable

| | Singular | Plural |
|---|---|---|
| 1P | noya | X |
| 2P | eito | eđo |
| 3P, feminine | et̍eki | et̍eko |
| 3P, masculine | eiwo | et̍ero |
| 3P, inanimate | zak̍a | zako |
| 1P, inclusive | X | loto |
| 1P, exclusive | X | lõno |

Inalienable

| | Singular | Plural |
|---|---|---|
| 1P | noya | X |
| 2P | eito | eđo |
| 3P, feminine | eteki | eteko |
| 3P, masculine | eiwo | etero |
| 3P, inanimate | zaka | zako |
| 1P, inclusive | X | lodo |
| 1P, exclusive | X | lonø |
Definite Article: Agreeing demonstrative pronoun used as definite article, as mentioned before.
Indefinite Article: There are no indefinite articles in Dzelina. A noun with no definite article can be interpreted as indefinite or definite. The word for ‘one’ cannot be used.
Conjunctions: Nominal and verbal conjunctions are differentiated, as are 'and' and 'with': noun-phrase 'and' and verb-phrase 'and' are distinct words.
Co-ordinating

| | |
|---|---|
| and (verb phrase) | a |
| and (noun phrase) | ni |
| with | it̍eya |
| but | oma |
| because | lada |
| yet | wete |
| therefore | ok̍ana |
| also | yaro |
| for | aida |
| or | te |
| nor | ate |

Clauses
Forming Polar Questions: Rising intonation, with the particle le~ in sentence-initial position. The le~ particle is often omitted in informal speech.
Forming Content Questions: The specific interrogative is placed at the end of the clause, with the particle le~ at the beginning of the sentence, although this is optional (typically reserved for formal contexts).
Forming Sub-clauses: The subordinator auxiliaries yo~ and fi~ transform the following clause into the subject and object respectively. Adverbs pertaining to the clause come before the fi~ prefix but after the yo~ prefix. If the fi~ accusative clause does not have an object, then yo~ is used. Note that this still acts as if there were a case system, but it only needs to differentiate subject and object.
Forming Relative Clauses: The relative clause follows the noun. The relative particle precedes the relative clause: o~ (singular) and ze~ (plural). The singular and plural forms of the particle denote the plurality of the noun the relative clause is formed after. Relative clauses are negated in the same manner as regular verb phrases, except that the negative always takes the plural form and comes before the verb root.
Order of Postpositions: V S O pos. This means 'S is postposition to O'. There is no person marking on Dzelina postpositions.
Order of Adverbs: When listing several adverbs, a rough guide is Probability > Time > Location > Means/Method, where probability comes first and means/method last.
Order of Adjectives: When listing several adjectives, a rough guide is Characteristics > Relationships.
Constructions: Two particles must appear to form a construction. The Delta (δ) particle appears before the verb but after the adverb. The Epsilon (ε) particle appears between the subject and object of the sentence. The constructions found in Dzelina are the Reciprocal/Reflexive, the Passive (which also inflects for plurality of the subject), the Applicative (benefactive object only, both transitive and intransitive bases), the Causative (periphrastic and morphological, but no compound), Nominalisation (semantic) and the Existential (which substitutes for indefinite pronouns, as mentioned above).
- Reciprocal/Reflexive – -mi-, appears between the root and the tense marking.
- Passive – k̍u~ (δ.SG), ~ko (ε.SG), k̍e~ (δ.PL), ~ke (ε.PL)
- Applicative – ewa~ (δ), ~ma (ε)
- Causative – ne~ (δ), ~ona (ε) * when the verb is lo or ends in -ozo/-oso, the δ particle is e~
- Nominalisation (semantic) – the a- prefix can be attached to a verb root.
- Existential – ome~ (δ), ~sa (ε)

Example Phrases and Sayings

talereto ńara - [ˌtaleˈɾeto ɲaɾa] - Good day (addressing one person, formal. Literally; 'I wish you joy')
daroreto ńara - [ˌd̥aɾoˈɾeto ɲaɾa] - Good day (addressing multiple people, formal. Literally; 'I wish you joy')
ńara - [ɲaɾa] - Hi/Bye (Informal. Literally; 'joy')
ledukeleze - [ledʌkeˈleze] - Please (Raised intonation towards the end. Literally; 'If willing?')
ewa - [eʍa] - Thanks (Literally; 'Grace', in terms of 'courteous good will')
k̍i ni ro - [k'i ni ɾo] - Yes and no
k̍ateke aide so - [k'ateke aid̥e so] - I love you (romantically)
cesake aide so - [t͡sesake aid̥e so] - I love you (as family or as a very close friend)
eipãnireleta yo reveke eijiya sapãme - [eipãniɾeˈleta ʎo ɾeveke eid͡ziya sapãme] - Hard work pays off (Idiom/Proverb. Literally; whoever waters the soil will blossom)

Thank you very much for reading!
submitted by Otnerio to conlangs

FlowCards: A Declarative Framework for Development of Ergo dApps

Introduction
ErgoScript is the smart contract language used by the Ergo blockchain. While it has a concise syntax adopted from Scala/Kotlin, it may still seem confusing at first, because conceptually ErgoScript is quite different from the conventional languages we all know and love. This is because Ergo is a UTXO-based blockchain, whereas smart contracts are traditionally associated with account-based systems like Ethereum. However, Ergo's transaction model has many advantages over the account-based model, and with the right approach it can even be significantly easier to develop Ergo contracts than to write and debug Solidity code.
Below we will cover the key aspects of the Ergo contract model which make it different:
Paradigm
The account model of Ethereum is imperative. This means that the typical task of sending coins from Alice to Bob requires changing the balances in storage as a series of operations. Ergo's UTXO-based programming model, on the other hand, is declarative: ErgoScript contracts specify conditions for a transaction to be accepted by the blockchain, not changes to be made in the storage state as a result of contract execution.
Scalability
In the account model of Ethereum, both storage changes and validity checks are performed on-chain during code execution. In contrast, Ergo transactions are created off-chain and only validation checks are performed on-chain, thus reducing the amount of work performed by every node on the network. In addition, due to the immutability of the transaction graph, various optimization strategies are possible to improve the throughput of transactions per second in the network. Light verifying nodes are also possible, further facilitating the scalability and accessibility of the network.
Shared state
The account-based model relies on shared mutable state, which is known to lead to complex semantics (and subtle million-dollar bugs) in the context of concurrent/distributed computation. Ergo's model is based on an immutable graph of transactions. This approach, inherited from Bitcoin, plays well with the concurrent and distributed nature of blockchains and facilitates light trustless clients.
Expressive Power
Ethereum advocated the execution of a Turing-complete language on the blockchain. This theoretically promised unlimited potential; in practice, however, severe limitations came to light: excessive blockchain bloat, subtle multi-million-dollar bugs, gas costs which limit contract complexity, and other such problems. Ergo, on the flip side, extends UTXO to enable Turing-completeness while limiting the complexity of the ErgoScript language itself. The same expressive power is achieved in a different and more semantically sound way.
With all of the above points, it should be clear that there are a lot of benefits to the model Ergo is using. In the rest of this article I will introduce you to the concept of FlowCards - a dApp developer component which allows for designing complex Ergo contracts in a declarative and visual way.
From Imperative to Declarative
In the imperative programming model of Ethereum, a transaction is a sequence of operations executed by the Ethereum VM. The following Solidity function implements a transfer of tokens from sender to receiver. The transaction starts when the sender calls this function on an instance of a contract and ends when the function returns.
```solidity
// Sends an amount of existing coins from any caller to an address
function send(address receiver, uint amount) public {
    require(amount <= balances[msg.sender], "Insufficient balance.");
    balances[msg.sender] -= amount;
    balances[receiver] += amount;
    emit Sent(msg.sender, receiver, amount);
}
```
The function first checks the pre-conditions, then updates the storage (i.e. the balances) and finally publishes the post-condition as the Sent event. The gas consumed by the transaction is sent to the miner as a reward for executing it.
Unlike in Ethereum, a transaction in Ergo is a data structure holding a list of input coins which it spends and a list of output coins which it creates, preserving the total balances of ERGs and tokens (in this respect Ergo is similar to Bitcoin).
Turning back to the example above: since Ergo natively supports tokens, for this specific example of sending tokens we don't need to write any code in ErgoScript. Instead we need to create the 'send' transaction shown in the following figure, which describes the same token transfer, but declaratively.
https://preview.redd.it/id5kjdgn9tv41.png?width=1348&format=png&auto=webp&s=31b937d7ad0af4afe94f4d023e8c90c97c8aed2e
The picture visually describes the following steps, which the network user needs to perform:
  1. Select unspent sender's boxes, containing in total tB >= amount of tokens and B >= txFee + minErg ERGs.
  2. Create an output target box which is protected by the receiver public key with minErg ERGs and amount of T tokens.
  3. Create one fee output protected by the minerFee contract with txFee ERGs.
  4. Create one change output protected by the sender public key, containing B - minErg - txFee ERGs and tB - amount of T tokens.
  5. Create a new transaction, sign it using the sender's secret key and send to the Ergo network.
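Here is a minimal Scala sketch of steps 1-5 (this is deliberately not the real Appkit API; Box, Tx, the guard strings and the minErg/txFee parameters are all illustrative). The point is that the result is plain data whose properties, such as ERG preservation, a node can check without re-running any of these steps:

```scala
object SendTx {
  final case class Box(ergs: Long, tokens: Long, guard: String)
  final case class Tx(inputs: List[Box], outputs: List[Box])

  def build(inputs: List[Box], senderPk: String, receiverPk: String,
            amount: Long, minErg: Long, txFee: Long): Tx = {
    val b  = inputs.map(_.ergs).sum   // B: total ERGs in the selected boxes
    val tB = inputs.map(_.tokens).sum // tB: total tokens in the selected boxes
    require(tB >= amount && b >= txFee + minErg, "insufficient funds") // step 1
    Tx(inputs, List(
      Box(minErg, amount, guard = receiverPk),               // step 2: target box
      Box(txFee, 0L, guard = "minerFee"),                    // step 3: fee box
      Box(b - minErg - txFee, tB - amount, guard = senderPk) // step 4: change box
    )) // step 5 would sign this data structure and submit it to the network
  }

  // The node never replays these steps; it only checks properties of the
  // finished transaction, e.g. that total ERGs are preserved.
  def preservesErgs(tx: Tx): Boolean =
    tx.inputs.map(_.ergs).sum == tx.outputs.map(_.ergs).sum
}
```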
What is important to understand here is that all of these steps are performed off-chain (for example using the Appkit Transaction API) by the user's application. Ergo network nodes don't need to repeat this transaction creation process; they only need to validate the already-formed transaction. ErgoScript contracts are stored in the inputs of the transaction and check spending conditions. The node executes the contracts on-chain when the transaction is validated. The transaction is valid if all of the conditions are satisfied.
Thus, in Ethereum, when we "send amount from sender to recipient" we are literally editing balances and updating the storage with a concrete set of commands. This happens on-chain, and thus a new transaction is also created on-chain as the result of this process.
In Ergo (as in Bitcoin), transactions are created off-chain and the network nodes only verify them. The effect of a transaction on the blockchain state is that input coins (or Boxes, in Ergo's parlance) are removed and output boxes are added to the UTXO set.
In the example above we don't use an ErgoScript contract but instead assume a signature check is used as the spending pre-condition. However in more complex application scenarios we of course need to use ErgoScript which is what we are going to discuss next.
From Changing State to Checking Context
In the send function example we first checked the pre-condition (require(amount <= balances[msg.sender], ...)) and then changed the state (i.e. updated the balances: balances[msg.sender] -= amount). This is typical in Ethereum transactions: before we change anything, we need to check whether it is valid to do so.
In Ergo, as we discussed previously, the state (i.e. UTXO set of boxes) is changed implicitly when a valid transaction is included in a block. Thus we only need to check the pre-conditions before the transaction can be added to the block. This is what ErgoScript contracts do.
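To model that idea in plain Scala (illustrative only; real ErgoScript exposes a context with globals such as INPUTS, OUTPUTS and HEIGHT, not these invented fields): a contract is nothing more than a pure predicate over the spending context.

```scala
object SpendingConditions {
  // A toy spending context; the field names here are hypothetical.
  final case class Context(height: Int, outputValues: List[Long], senderSigned: Boolean)

  // A contract answers one question: may these inputs be spent in this context?
  type Contract = Context => Boolean

  // Hypothetical condition: spendable after height 100000, by the sender,
  // provided the first output holds at least minValue nanoERGs.
  def timeLockedPayment(minValue: Long): Contract = ctx =>
    ctx.height > 100000 &&
      ctx.senderSigned &&
      ctx.outputValues.headOption.exists(_ >= minValue)
}
```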
It is not possible to “change the state” in ErgoScript because it is a language to check pre-conditions for spending coins. ErgoScript is a purely functional language without side effects that operates on immutable data values. This means all