Blog moved to http://www.andrevdm.com/

Tuesday 18 November 2014

F# FunScript with NancyFx and Ractive


Getting started


I just started with FunScript and got stuck on a few of the basics. Here is how I got it all working.

FunScript is a library that compiles F# to JavaScript. This lets you write strongly typed client side code in F#. It takes advantage of many F# features, such as async workflows (no callbacks). Take a look at the FunScript page for more information.

Ractive.JS is a template-driven reactive UI library initially created at the Guardian. The Ractive tutorials are very clear and definitely worth working through.

NancyFx is a lightweight HTTP framework for .NET. It is less popular in the F# world than in the C# world, but it still works very well from F#.


Read the official docs at http://funscript.info/. The introduction and tutorials are very good and will give you a good background in what FunScript is and how it works.

The code below is based on these demos as well as one of Alfonso's projects https://github.com/alfonsogarciacaro/PinkBubbles.Informa


In this post I'll demonstrate the following
  1. Serving data as JSON with NancyFx
  2. Declaring types in F# and using them in FunScript code
  3. Reading the JSON with FunScript and casting to the strongly typed type
  4. Rendering a basic Ractive template
Although I'm using NancyFx here, any web framework should work in much the same way.


Creating the solution


  1. Open Visual Studio
  2. Create a new F# console project
  3. Add the following NuGet references
    • FunScript
    • funscript.TypeScript.binding.lib
    • nancy.hosting.self
  4. Enable type providers when prompted
  5. Get FunScript.HTML.dll and FunScript.HTML.Ractive.dll.
    I got them from Alfonso's GitHub demo; they are included in my GitHub repo
  6. Download and store the two DLLs in a local folder (e.g. lib)
  7. Add reference to the two DLLs

NancyFx - Serving some data

The code below serves a simple page over NancyFx.
Run the project. If you get permission errors, you will need to run Visual Studio as an administrator. Browse to http://localhost:6543/ping. This snippet:
  1. Creates a NancyFx module "IndexModule"
  2. Defines a route for GET /ping

    This may look a bit strange, but all it does is specify that whenever a GET request is received for /ping, the current DateTime string should be returned. 'box' is called to box the result as an object, since that is what NancyFx expects the route to return
  3. Starts the host on port 6543
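The shape of this snippet — a module holding a route table that maps requests to handlers — is common to most HTTP frameworks. As an illustration only (not NancyFx's API), the same idea can be sketched in Python with a plain dictionary of routes; `routes` and `handle` are hypothetical names:

```python
from datetime import datetime

# Hypothetical mini-router illustrating the idea behind the NancyFx module:
# a table mapping (HTTP method, path) pairs to handler functions.
routes = {
    ("GET", "/ping"): lambda: str(datetime.now()),  # /ping returns the current time
}

def handle(method, path):
    # Look up the handler for this method/path pair; 404 if none is registered.
    handler = routes.get((method, path))
    return handler() if handler else "404 Not Found"
```

A real framework adds parameter binding, content negotiation and hosting on top, but the routing core is essentially this lookup.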


F# Types


As a simple example all types in the current assembly will be returned. Below is the function that does this as well as the AssemblyTypes record for this data.

Serving the data as JSON



The /data route has been added. It calls the getAssemblyTypes function defined above and returns the data as JSON.
To format the data as JSON, call NancyFx's FormatterExtensions.AsJson method.
If you run the project and open http://localhost:6543/data in your browser, you should see the JSON data being returned.
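The pattern — build a typed record, then let a formatter turn it into JSON — is the same in any language. A hedged Python sketch of the idea (not the post's F# code; the AssemblyTypes name is borrowed from the post, `as_json` is a hypothetical helper):

```python
import json
from dataclasses import dataclass, asdict, field

# Illustrative analogue of the AssemblyTypes record being served as JSON.
@dataclass
class AssemblyTypes:
    assembly: str
    types: list = field(default_factory=list)

def as_json(record):
    # The formatter's job: serialise the typed record into a JSON string.
    return json.dumps(asdict(record))

data = AssemblyTypes(assembly="Demo", types=["IndexModule", "AssemblyTypes"])
```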

FunScript - Index page and template


At this point we have a simple HTTP server that can serve JSON data. Now the scaffolding to get FunScript working needs to be added. Create an index.html page that will be used to display the data.
This page contains the following items
  • A "ractive-container" div into which the template will be rendered
  • A "ractive-template" script block containing the template
  • A script include for ractive
  • A script include for the JavaScript generated by the FunScript compiler. I'll show how this works below

Add the NancyFx route for the index page. I'm loading the page using File.ReadAllText(). (NancyFx can serve static pages, but I'm avoiding the discussion about bootstrappers, resource directories, etc. here. Read the NancyFx docs about this if you are using it in a real project.)



FunScript - Compiling F# to JavaScript


Here is the F# code that compiles the F# to JavaScript. I'll comment on each section of the code below.
Since this is the F# that needs to be compiled by the FunScript compiler to JavaScript, the module needs the ReflectedDefinition attribute.
The compile function does the actual compilation. This should be familiar from the FunScript guide.
The start function does the following
  •   Creates a Ractive instance
  •   Creates the initial application state
  •   Starts a Ractive loop (mainLoop function)

The application state is stored in a record defined like this
Here is the start function. Note the creation of the initial RactiveState. Also note that JavaScript expects an array, not a list or a seq.

That is the main scaffolding for getting FunScript and Ractive working. Next is the mainLoop function, which contains the F# code that is going to run in the browser.

This function does the following
  • Checks if a reload of the data was requested
  • Calls our /data endpoint to get the JSON
  • Casts the JSON data to the strongly typed F# type
  • Marks the state as updated


This looks a little complex but the core of it is the following
  1. A WebRequest instance is created.
  2. AsyncGetJSON<> is called and casts the JSON to our F# type.
    This is the real magic of FunScript. The AsyncGetJSON makes an AJAX call from the browser for you and then lets you work with your data in a strongly typed way.
  3. The two Globals.console.log calls show the typed data being used.
  4. Finally an updated state is returned
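What AsyncGetJSON<> does — fetch JSON and hand it back as a typed value — can be mimicked in any typed setting. Here is a hedged Python sketch of just the typing half (no AJAX; `json_to_typed` is a hypothetical helper, not FunScript's API):

```python
import json
from dataclasses import dataclass

@dataclass
class AssemblyTypes:
    assembly: str
    types: list

def json_to_typed(text):
    # Parse the JSON payload and lift it into the typed record, so the
    # rest of the code works with named fields rather than raw dictionaries.
    raw = json.loads(text)
    return AssemblyTypes(assembly=raw["assembly"], types=raw["types"])

payload = '{"assembly": "Demo", "types": ["IndexModule"]}'
typed = json_to_typed(payload)
```

The benefit is the same as in the F# version: after this one conversion step, typos in field names become errors instead of silent `undefined` values.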


Serving app.js



All that is left is to serve the FunScript-compiled F# as app.js.
Not terribly pretty, but simple enough. A text response object is created containing the JavaScript compiled from the F# (by the Web.compile function). Then the content type is changed to application/javascript. Finally, the response object is returned.

If you run the project and browse to http://localhost:6543 you will see your data being displayed. The code itself is mostly scaffolding and is pretty simple. With the basics working you can now continue with the tutorials from Ractive and FunScript to take things further.

Code


The full code is on GitHub: https://github.com/andrevdm/FunScriptRactorAndNancyDemo

Feel free to suggest any improvements

Tuesday 28 October 2014

Connecting to Microsoft SQL server from Clojure

For some reason I battled to find a good reference on using MSSQL from Clojure. Here is how I got it working.

There is a choice between the proprietary MS driver and the open source jTDS one. I opted for jTDS; see http://jtds.sourceforge.net/faq.html


jTDS JAR

Get the jTDS-n.n.n.JAR from the zip file on SourceForge and place it on your class path.

To get your class path you can run
   lein classpath


Project dependencies

Add the java.jdbc dependency to your project.clj
   [org.clojure/java.jdbc "0.3.5"]


Authentication

Getting the JDBC connection string just right was where I had issues. This is what I ended up with:

  (let [sql-db {:subprotocol "jtds:sqlserver",
               :subname (str "//" sqlServer "//" sqlDb ";useNTLMv2=true;domain=" domain),
               :user userName,
               :password password}]


Where
  • sqlServer is the SQL server machine name / IP
  • sqlDb is the default SQL database
  • domain is the domain for your user account
  • userName is the user name
  • password is the password


SSO

If you want to use single sign-on (SSO) / integrated security then you need the ntlmauth.dll from the jTDS download zip. It's in the /x64/SSO folder. The DLL must be placed in the same folder as the jTDS JAR. This only works on a Windows host.

If you use SSO, remove the :user and :password entries from the map above.




That's it. Good luck!

Friday 25 July 2014

Unit testing embedded C projects with seatest

I recently wrote my first embedded C project and was quite surprised to find that unit tests were not as widely used as I would have expected. There seems to be a general opinion that unit tests are less useful for embedded development than for application development. I find this very strange, because debugging embedded systems is hard compared to application development.

Fortunately not everyone agrees with this sentiment, and in the end it was relatively easy to get unit tests working thanks to SeaTest (https://code.google.com/p/seatest/). SeaTest is simple and specifically designed for embedded C projects.

To get it working for a Microchip MPLabX project I did this
  1. Created a _test directory in my main project's directory
  2. Created my unit tests in this directory in test.c
  3. In this directory created a bash script named "test"
  4. Copied seatest.c and seatest.h into the directory
  5. Created dummy PIC include files in this directory
    1. xc.h
    2. xc.c
    3. pic18f4550.h
  6. Created a plib directory under _test containing
    1. timers.h
The idea is that my test directory (_test) contains files for mocking the Microchip libraries to make testing possible. This turned out to be much simpler than I had feared.


Test runner bash script

#!/bin/sh
gcc -std=c99 -o test.o -D TESTING -I . -I .. xc.c test.c ../buttons.c ../lcd.c ../dateTime.c ../timerUi.c ../timer.c seatest.c && ./test.o

Nothing fancy... It does this
  • Defines a TESTING constant
  • Includes the local directory for the "mock" libraries
  • Includes the parent directory for my actual code
  • Runs ./test.o if the compile succeeds

Mock/Stub files

Most of the stub files (timers.h etc) are empty files just to keep the compiler happy. I then just copied whatever definitions I needed from MPLabX's libraries to get the rest working.


Tests

Tests are then simply a matter of calling a function and using the seatest assert functions.

There are definitely things that are hard to test, e.g. interrupt routines, but as long as your code is modular you should usually be able to test the functions that the interrupt routine calls.

Overall I found this to be a very simple approach, and it certainly helped me get my project up and running a lot faster and with a lot more confidence.

Sunday 16 February 2014

Parsing s-expressions in Clojure

Introduction


This is a quick look at parsing in Clojure: first using instaparse, and then writing the lexer and parser by hand. The comparison should illustrate how great instaparse is, but also show that writing a simple lexer and parser is not as complex as some would think.

BTW this is my first Clojure project so I may have got some of the idioms in the code wrong. I'll update the code samples based on feedback here and on the project repo on GitHub :)

The demo project


To demonstrate instaparse I'll be implementing a simple external DSL. The DSL should have the following characteristics
  1. Expressions written as sexprs
  2. External DSL - I'm not interested in using the clojure reader to read the sexpr for this demo
  3. Constrained - functions can only be defined in clojure not in the DSL itself. The functions available to the DSL must be strictly controlled.
All code is in the github repository (https://github.com/andrevdm/blog-clojure-sexpr-parse)

Instaparse

Using instaparse

Instaparse (https://github.com/Engelberg/instaparse) is a Clojure library for generating a parser (and lexer) from an EBNF/ABNF grammar. It is one of the easiest parser generators I've used; I highly recommend giving it a try.

 The grammar

The instaparse page has a nice introduction to the grammar syntax. Start there if you are not familiar with EBNF.

Here is the grammar that I'll be parsing

   S = (expression )*
    expression = list | vector | atom
    list = <'('> (expression )* <')'>
    vector = <'['> (expression )* <']'>
    atom = number | string | name
    number = #'\d+'
    string = <'"'> #'[^\"]+' <'"'>
    name = #'[a-zA-Z\+-]([0-9a-zA-Z\+-]*)'
    ws = #'\s+'




This is pretty standard EBNF. Some things to note

  1. Wrap an element in angle brackets to remove it from the output, e.g. <'('>
  2. Match literal characters with single quotes, e.g. '('
  3. Write regular expressions as #'regex'
  4. Remember to escape regex characters correctly. See the code example for the correct escaping

Again the instaparse page has a nice introduction that covers all of this.

The output parse tree


The output parse tree from instaparse can be in hiccup or enlive format. I'll be using the default hiccup format.

As an example here is the output for "(+ 1 2 3) 4"

   [:S
    [:expression
     [:list
      [:expression [:atom [:name "+"]]]
      [:expression [:atom [:number "1"]]]
      [:expression [:atom [:number "2"]]]
      [:expression [:atom [:number "3"]]]]]
    [:expression [:atom [:number "4"]]]]


Instaparse can visualise a parse tree using graphviz and rhizome (see https://github.com/Engelberg/instaparse#visualizing-the-tree).



Interpreting the parse tree

There are several ways to interpret the output from instaparse, e.g. using zippers or the built-in instaparse transformation functions. However, I chose plain recursive multimethods since the tree is so simple.

   (defmulti run (fn [s] (nth s 0)))
   (defmethod run :S [[s & es]] (last (doall (map run es))))
   (defmethod run :expression [[e t]] (run t))
   (defmethod run :atom [[a t]] (run t))
   (defmethod run :number [[n val]] (read-string val))
   (defmethod run :string [[s val]] val)
   (defmethod run :vector [[v & vs]] (vec (map run vs)))
   (defmethod run :name [[n & nn]] (first nn))
   (defmethod run :list [[l n & ls]] (let [args (map run ls)]
                                        (apply (methods (run n)) args)))



The multimethod's dispatch function gets the first item from each vector. Look at the parse tree above and you'll see that this is always the type of the current element (:S, :expression, :number, etc.)

Each method is then responsible for destructuring its element type. E.g. the :number method must parse the number string and return a number, and the :vector method must return a vector. Each method calls the run multimethod recursively to evaluate nested elements down to the lowest-level atoms.

Notice that the :S method calls doall to force evaluation of the whole lazy seq, and then last to get the final value. I.e. the interpreter returns the last value evaluated, just as Clojure would.

The :list method is where the interpreter actually "runs" functions called by the DSL.



   (defmethod run :list [[l n & ls]] (let [args (map run ls)]
                                         (apply (methods (run n)) args)))




The parameters [[l n & ls]] destructure the incoming element into
  1.  l = the :list
  2.  n = the name of the function as a :name element
  3.  ls = the function arguments

Remember that a list is executed by treating the first expression as the function and the rest as the arguments to that function.


Once we have the arguments they must be evaluated by calling run for each argument
   (map run ls)

We get the name of the function to run
   (run n)

We look up the actual function to call in the methods map. It is this map that lets us control exactly which functions can be called. All together it looks like this
   (let [args (map run ls)] (apply (methods (run n)) args))
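The whole dispatch-and-recurse scheme is small enough to restate outside Clojure. Here is an illustrative Python version (not the post's code) that walks the same hiccup-style tree; keywords like :S become plain strings, and the FUNCS whitelist plays the role of the methods map:

```python
# Dispatch on the tag at index 0 of each hiccup-style node and recurse.
FUNCS = {"+": lambda *args: sum(args), "prn": print}

def run(node):
    tag = node[0]
    if tag == "S":                      # evaluate every child, return the last value
        return [run(c) for c in node[1:]][-1]
    if tag in ("expression", "atom"):   # wrapper nodes: unwrap and recurse
        return run(node[1])
    if tag == "number":
        return int(node[1])
    if tag in ("string", "name"):
        return node[1]
    if tag == "vector":
        return [run(c) for c in node[1:]]
    if tag == "list":                   # head names the function, rest are arguments
        func = FUNCS[run(node[1])]
        return func(*(run(c) for c in node[2:]))
    raise ValueError("unknown node: " + repr(tag))
```

Feeding it the tree shown earlier for "(+ 1 2 3) 4" returns 4, the last value evaluated, just like the Clojure version.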


Full sample code

Here is the full code for the DSL parser and interpreter using instaparse
(ns cljsexp-instaparse.core
 (:require [instaparse.core :as insta]))

(def parse
 (insta/parser
 "S = (expression )*
 expression = list | vector | atom
 list = <'('>  (expression )* <')'>
 vector = <'['> (expression )* <']'>
 atom = number | string | name
 number = #'\\d+'
 string = <'\"'> #'[^\\\"]+' <'\"'>
 name = #'[a-zA-Z\\+-]([0-9a-zA-Z\\+-]*)'
 ws = #'\\s+'"))

(def methods
 {"+" +
 "-" -
 "*" *
 "/" /
 "++" inc
 "--" dec
 "prn" println})

(defmulti run (fn [s] (nth s 0)))
(defmethod run :S [[s & es]] (last (doall (map run es))))
(defmethod run :expression [[e t]] (run t))
(defmethod run :atom [[a t]] (run t))
(defmethod run :number [[n val]] (read-string val))
(defmethod run :string [[s val]] val)
(defmethod run :vector [[v & vs]] (vec (map run vs)))
(defmethod run :name [[n & nn]] (first nn))
(defmethod run :list [[l n & ls]] (let [args (map run ls)]
                                       (apply (methods (run n)) args)))

Conclusion - instaparse

Instaparse is amazing. It makes writing a parser very easy indeed. A simple sexpr parser and interpreter in less than 40 lines of Clojure is a great result.


A simple recursive descent parser


Writing the lexer and parser by hand is an interesting exercise, as it shows that it's not too hard to do. However, in my opinion it also shows how much simpler instaparse makes things, even for simple projects.

For what it is worth note that there is no mutable state in this code. All the functions are pure. This made testing very easy.

Lexing



Lexing or tokenising a string is the process of converting the characters from the source code into higher level tokens (equivalent to taking individual letters and making words).

E.g. taking this character stream

 | ( | i | f |   | ( | a | n | d | ( | a | b | c |

And creating these tokens

 left-paren, if, left-paren, and, left-paren, abc

Each token has meta-data associated with it. Such as the line and column in the source file and the type of token (string vs name vs paren etc).

Tokenising the input means that the parser does not need to deal with individual characters but rather can work with higher level tokens. This greatly simplifies the design as the concerns of lexing the input and parsing the resulting tokens can be separated. In a recursive descent parser you could lex the next token on demand rather than lex everything first as I have here.

NB remember that the output of the tokeniser is a flat list of tokens. No meaning has yet been inferred from the source code
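To make the lexing step concrete, here is a hedged Python sketch of the same scheme (illustrative only, not the Clojure code shown later): single characters are looked up in one table, everything else is matched by regex, and every token carries its position.

```python
import re

# Single-character tokens are looked up directly; anything else is matched
# with a regex. Each token records its type, value, line and column.
BY_CHAR = {"(": "lparen", ")": "rparen", "[": "lbracket", "]": "rbracket"}
BY_REGEX = [(re.compile(r"\d+"), "number"),
            (re.compile(r"[a-zA-Z+\-*/<>=?_$]+"), "name"),
            (re.compile(r"'[^']*'"), "string")]

def tokenise(line, lineno=0):
    tokens, col = [], 0
    while col < len(line):
        c = line[col]
        if c.isspace():                 # skip whitespace
            col += 1
            continue
        if c in BY_CHAR:                # single-character token
            tokens.append({"type": BY_CHAR[c], "val": c, "line": lineno, "col": col})
            col += 1
            continue
        for rx, kind in BY_REGEX:       # first regex that matches wins
            m = rx.match(line, col)
            if m:
                tokens.append({"type": kind, "val": m.group(), "line": lineno, "col": col})
                col = m.end()
                break
        else:
            raise ValueError(f"don't understand next token at {lineno}:{col}")
    return tokens
```

Note that the output is still just a flat list of dictionaries; structure only appears later, in the parser.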


Each token produced by the tokeniser has the following Clojure structure

{:type :xxx,
 :val xxx,
 :line xxx,
 :col xxx,
 :expressions []}


Each token has a
  1. Type (e.g. name/string/list)
  2. Value (e.g. the numeric or string value of the text)
  3. The line and column number that the token started in the source file
  4. A placeholder for nested expressions


Matching the next token


(def tokenMap {:byChar { \( :lparen
                         \) :rparen,
                         \[ :lbracket,
                         \] :rbracket}
               :byRegex { #"'" parseString
                          #"\d+" parseNumber
                          #"[a-zA-Z\+\-\*\\\/\?_\$\<\>=]" parseName
                          #";" parseComment }})







Here there are two maps. The first identifies single character tokens such as brackets or parentheses. The second uses a regular expression to match the first letter of a token and defines the function that gets called to tokenise it.

For example if the tokeniser gets a semi-colon it calls the parseComment function which calls the parseRegex helper function. Below you can see these two methods. When a semi-colon is found the regex will match to the end of the line and the current position will be moved (moveRight) by the number of matched characters.

(defn parseRegex [state, typeName, token, re]
  (let [s (subs (currentLine state) (:col state))
        val (re-find re s)]
    ;Does the remainder of the line match the regex - it should!
    (if val

      (assoc
          (moveRight state (count val))
        :token token
        :val val)

      (throw (Exception. (str "Failed to parse " typeName))))))

(defn parseComment [state]
  (parseRegex state "comment" :comment #";.*"))




Moving in the input stream

Below is the moveRight function which moves right in the input stream. Notice that this takes the current position in a state argument and returns a new state as a result. I.e. nothing is mutated.

(defn moveRight [state by]
  "Move the current position right by 'by' chars, rolling over to the next line if required"
  (let [updated (assoc state :col (+ (:col state) by) )]
    (let [line (currentLine state)]
      (if (< (:col updated) (count line))

        ;Still space on current line, return it
        updated

        ;Move to next line
        (assoc
          state
          :col 0
          :line (inc (:line state)))))))



Running the tokeniser


Finally here are the two functions that control the tokenising

(defn- nextToken [state]
  "Gets the next token"
  (let [c (currentChar state)]
    (cond

     (nil? c) (clearToken state)

     ;Ignore white space
     (Character/isSpaceChar c) (recur (moveRight state 1))

     ;Check if a token can be found in the token map by character
     :else  (if-let [token ((:byChar tokenMap) c)]
              (assoc (moveRight state 1) :token token :val c)

              ;Nothing found so now search by regex
              ; Get the function associated with the first regex that matches and call that
              (if-let [r (first (filter #(re-matches (% 0) (str c)) (:byRegex tokenMap)))]
                ((r 1) state)
                (throw (Exception. (str "dont understand next token - " c state))))))))


(defn- tokenise [state]
  (loop [nextState (nextToken state), tokens []]
    (if (= :none (:token nextState))
      tokens
      (recur
       (nextToken nextState)
       (conj tokens {:line (:line nextState),
                     :col (:col nextState),
                     :val (:val nextState),:type (:token nextState)})))))





nextToken gets the next single token.
tokenise repeatedly calls nextToken until the whole input stream has been tokenised.

Parsing


At this point the lexer has lexed the entire file and the parser can now parse the token stream.

The function that runs the parser is parseAll

(defn- parseAll [allTokens]
  (loop [expressions [], tokens allTokens]
    (let [r (parseExpression (first tokens) (rest tokens))]
      (if (= 0 (count (:expr r)))
        expressions
        (recur (conj expressions (:expr r)) (:tokens r))))))





But all the work is actually done in parseExpression. This is quite a long function that is just a large case statement. Not pretty but reasonably clear, hopefully.

(defn- parseExpression [token tokens]
    (case (:type token)
      (nil '()) [nil tokens]

      :name {:expr {:type :name,
                    :val (:val token),
                    :line (:line token),
                    :col (:col token),
                    :expressions []}
             :tokens tokens}

      :string {:expr {:type :string,
                      :val (:val token),
                      :line (:line token),
                      :col (:col token),
                      :expressions []}
               :tokens tokens}

      :number {:expr {:type :number,
                      :val (read-string (:val token)),
                      :line (:line token),
                      :col (:col token),
                      :expressions []}
               :tokens tokens}

      (:lparen :lbracket) (let [grp (if (= :lparen (:type token))
                                      {:start :lparen, :end :rparen, :type :list}
                                      {:start :lbracket, :end :rbracket, :type :vector})]
                            (loop [expressions []
                                   [loopToken & loopTokens] tokens]

                              (let [type (:type loopToken)]
                                (cond
                                 (or (nil? loopToken) (= '() loopToken)) (throw (Exception. (str "EOF waiting for :rparen")))

                                 (= (:end grp) type) {:expr {:type (:type grp)
                                                             :val (:type grp)
                                                             :line (:line token)
                                                             :col (:col token)
                                                             :expressions expressions}
                                                      :tokens loopTokens}

                                 :else (let [r (parseExpression loopToken loopTokens)]
                                         (recur (conj expressions (:expr r)) (:tokens r)))))))))





This function switches on the type of the current token, returning the matched expression and the remaining tokens. For example, when it gets a :name it returns an :expr of type :name along with the rest of the tokens.

When parseExpression gets an :lparen or :lbracket it recursively loops through the tokens until the end of the list or vector, returning the tokens that have not been consumed.
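The recursive-descent step can be restated compactly. This is an illustrative Python sketch (not the Clojure code): parse_expression consumes one expression from the token list and returns the expression plus the remaining tokens, recursing when it sees an opening paren.

```python
# Each token is a dict like {"type": "number", "val": "1"}, as produced by a lexer.
def parse_expression(tokens):
    if not tokens:
        return None, []
    tok = tokens[0]
    if tok["type"] in ("number", "name", "string"):
        # Atoms consume exactly one token.
        return {"type": tok["type"], "val": tok["val"], "expressions": []}, tokens[1:]
    if tok["type"] == "lparen":
        # Lists recurse until the matching closing paren is found.
        rest, children = tokens[1:], []
        while True:
            if not rest:
                raise ValueError("EOF waiting for rparen")
            if rest[0]["type"] == "rparen":
                return {"type": "list", "expressions": children}, rest[1:]
            child, rest = parse_expression(rest)
            children.append(child)
    raise ValueError(f"unexpected token {tok}")
```

Threading the unconsumed tokens back out of every call is what makes the whole parser work without any mutable state.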

Interpreting the syntax tree

Evaluating the syntax tree is similar to the code in the instaparse version

(declare eval)

(defmulti run (fn [x] (:type x)))
(defmethod run :string [e] (:val e))
(defmethod run :number [e] (:val e))
(defmethod run :name [e] (:val e))
(defmethod run :vector [e] (vec (map run (:expressions e))))
(defmethod run :list [e] (do
                           (let [f (run (first (:expressions e)))
                                 args (map run (rest (:expressions e)))]
                             (apply (get funcs f) args))))
(defmethod run :default [e] (println "unknown: " e))


(defn eval [[car & cdr]]
  (let [r (run car)]
    (if (empty? cdr)
      r
      (recur cdr))))




Again a multimethod is used to recursively evaluate the syntax tree and as with the instaparse code only functions defined in the 'funcs' map may be executed.

Conclusion - hand written


The hand-written lexer and parser are a lot longer than the instaparse version. However, it is not that complicated to do manually. Personally I'll be using instaparse for 99% of my Clojure DSL needs, but it is always good to know how to do it by hand.

The full source code

(ns cljsexp-simple.core)


(def funcs {"prn" println
            "+" +})

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(defn currentLine [state]
  "Gets the current line"
  (get (:code state) (:line state)))


(defn currentChar [state]
  "Gets the current character"
  (get (currentLine state) (:col state)))


(defn moveRight [state by]
  "Move the current position right by 'by' chars, rolling over to the next line if required"
  (let [updated (assoc state :col (+ (:col state) by) )]
    (let [line (currentLine state)]
      (if (< (:col updated) (count line))

        ;Still space on current line, return it
        updated

        ;Move to next line
        (assoc
            state
          :col 0
          :line (inc (:line state)))))))


(defn parseRegex [state, typeName, token, re]
  (let [s (subs (currentLine state) (:col state))
        val (re-find re s)]
    ;Does the remainder of the line match the regex - it should!
    (if val

      (assoc
          (moveRight state (count val))
        :token token
        :val val)

      (throw (Exception. (str "Failed to parse " typeName))))))


(defn parseName [state]
  (parseRegex state "name" :name #"[a-zA-Z\+\-\*\\\/\?_\$\<\>=]+"))

(defn parseComment [state]
  (parseRegex state "comment" :comment #";.*"))

(defn parseString [state]
  (parseRegex state "string" :string #"'[^']+'"))

(defn parseNumber [state]
  (parseRegex state "number" :number #"\d+"))

(def tokenMap {:byChar { \( :lparen
                         \) :rparen,
                         \[ :lbracket,
                         \] :rbracket}
               :byRegex { #"'" parseString
                          #"\d+" parseNumber
                          #"[a-zA-Z\+\-\*\\\/\?_\$\<\>=]" parseName
                          #";" parseComment }})

(defn clearToken [state]
  (assoc state
    :token :none
    :val :none))

(defn- nextToken [state]
  "Gets the next token"
  (let [c (currentChar state)]
    (cond

     (nil? c) (clearToken state)

     ;Ignore white space
     (Character/isSpaceChar c) (recur (moveRight state 1))

     ;Check if a token can be found in the token map by character
     :else  (if-let [token ((:byChar tokenMap) c)]
              (assoc (moveRight state 1) :token token :val c)

              ;Nothing found so now search by regex
              ; Get the function associated with the first regex that matches and call that
              (if-let [r (first (filter #(re-matches (% 0) (str c)) (:byRegex tokenMap)))]
                ((r 1) state)
                (throw (Exception. (str "dont understand next token - " c state))))))))


(defn- tokenise [state]
  (loop [nextState (nextToken state), tokens []]
    (if (= :none (:token nextState))
      tokens
      (recur
       (nextToken nextState)
       (conj tokens {:line (:line nextState),
                     :col (:col nextState),
                     :val (:val nextState),:type (:token nextState)})))))

(defn- parseExpression [token tokens]
    (case (:type token)
      (nil '()) [nil tokens]

      :name {:expr {:type :name,
                    :val (:val token),
                    :line (:line token),
                    :col (:col token),
                    :expressions []}
             :tokens tokens}

      :string {:expr {:type :string,
                      :val (:val token),
                      :line (:line token),
                      :col (:col token),
                      :expressions []}
               :tokens tokens}

      :number {:expr {:type :number,
                      :val (read-string (:val token)),
                      :line (:line token),
                      :col (:col token),
                      :expressions []}
               :tokens tokens}

      (:lparen :lbracket) (let [grp (if (= :lparen (:type token))
                                      {:start :lparen, :end :rparen, :type :list}
                                      {:start :lbracket, :end :rbracket, :type :vector})]
                            (loop [expressions []
                                   [loopToken & loopTokens] tokens]

                              (let [type (:type loopToken)]
                                (cond
                                 (or (nil? loopToken) (= '() loopToken)) (throw (Exception. (str "EOF waiting for :rparen")))

                                 (= (:end grp) type) {:expr {:type (:type grp)
                                                             :val (:type grp)
                                                             :line (:line token)
                                                             :col (:col token)
                                                             :expressions expressions}
                                                      :tokens loopTokens}

                                 :else (let [r (parseExpression loopToken loopTokens)]
                                         (recur (conj expressions (:expr r)) (:tokens r)))))))))


(defn- parseAll [allTokens]
  (loop [expressions [], tokens allTokens]
    (let [r (parseExpression (first tokens) (rest tokens))]
      (if (= 0 (count (:expr r)))
        expressions
        (recur (conj expressions (:expr r)) (:tokens r))))))


(defn parse [code]
  (let [tokens (tokenise {:code code, :line 0, :col 0, :val :none, :token :none})
        result (parseAll tokens)]
    result))


;;;;;;;;;;;;;;;;;;;;;;;

(declare eval)

(defmulti run (fn [x] (:type x)))
(defmethod run :string [e] (:val e))
(defmethod run :number [e] (:val e))
(defmethod run :name [e] (:val e))
(defmethod run :vector [e] (vec (map run (:expressions e))))
(defmethod run :list [e] (do
                           (let [f (run (first (:expressions e)))
                                 args (map run (rest (:expressions e)))]
                             (apply (get funcs f) args))))
(defmethod run :default [e] (println "unknown: " e))


(defn eval [[car & cdr]]
  (let [r (run car)]
    (if (empty? cdr)
      r
      (recur cdr))))




Thursday 13 February 2014

Fixing assembly version conflicts in .net with AsmSpy

Occasionally you will get assembly version conflicts when building or running .NET projects. Here is a quick overview of how to fix them.



Using AsmSpy

Using AsmSpy is the easiest way to find assembly version conflicts.

Get it from: https://github.com/mikehadlow/AsmSpy

Run it on the build output directory. E.g.
  AsmSpy c:\Projects\SomeProject\bin\Debug

It will display a list of references and the assemblies that use them.


For example

Reference: log4net
   1.2.13.0 by ABCD
   1.2.13.0 by XYZ
   1.2.12.0 by EEE

Here you can see that EEE is expecting a lower version of log4net than the rest of the assemblies.

This makes it very easy to spot the errors. 
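Once spotted, the usual fix is an assembly binding redirect in the consuming application's app.config (or web.config), telling the runtime to treat requests for the old version as requests for the version you actually ship. A sketch for the log4net example above — the publicKeyToken shown is a placeholder; use the real token that the assembly reports:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- publicKeyToken is a placeholder: take the real token from the assembly -->
        <assemblyIdentity name="log4net" publicKeyToken="PLACEHOLDER" culture="neutral" />
        <!-- redirect every older request to the version actually deployed -->
        <bindingRedirect oldVersion="0.0.0.0-1.2.13.0" newVersion="1.2.13.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>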


Checking for errors manually

You can also manually check for errors by looking at the output window after a build. Using AsmSpy is a lot easier, though.