This is one of the scripting languages used in the Source Engine. I believe the first game that supported it was Left 4 Dead 2; it was certainly the first time I heard of it.
7k is just an arbitrary number. Depending on what your scripting language can do, that can be a lot of lines or very few. It's certainly not a selling point!
There are many such headlines on HN: "XXX in Y lines of code". I can't see why the number of lines of code matters. Regardless of what the scripting language actually does, I'd rather have more lines of readable and maintainable code than fewer lines of obfuscated code.
I think it's an important signifier for how easily you could read, and hopefully grok, the entire codebase. At 7k lines, I, who have no experience writing a scripting language, might be enticed to take a look. At 100k lines, I'm not sure where to start, and thus likely won't.
I agree it isn't; however, it has been noted here that Valve has used Squirrel in some of their games, including L4D2 and Portal 2. It has also been used in FF Crystal Chronicles: My Life As A King. So, much like Lua, it is used in video game development as an embedded scripting language.
I like writing languages, but I really don’t understand what possesses someone to write a new language that’s essentially a syntactic reskin of the same set of imperative/OO features found in most popular languages. We don’t need new syntax—we need new semantics.
It's a flyweight language, which makes it suitable for embedded devices. It brings some modern language conveniences to the embedded world (e.g., it's the language Electric Imp uses for its IoT devices: http://electricimp.com/).
It's a relatively old (10+ years) language, and it is distinctive in that it pairs a scripting language with reference counting as opposed to tracing garbage collection. How many other scripting languages do this?
Reference counting is a form of garbage collection, although a very bad one. And I don't know about the others, but Python does use reference counting, although it combines it with another garbage collection algorithm to avoid reference counting's most egregious failure: its inability to collect cycles.
Reference counting isn't even very bad; for certain patterns of allocation, it's even optimal. However, for certain people, having to manually deal with cycles is enough to switch to a mark-and-sweep type collection mechanism.
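The cycle problem mentioned above is easy to demonstrate in CPython, which backs its reference counting with a tracing cycle collector; disabling that collector leaves pure refcounting behind (a minimal sketch — `Obj` and its counter are made up for illustration):

```python
import gc

class Obj:
    freed = 0
    def __del__(self):
        Obj.freed += 1

gc.disable()  # leave only reference counting active

# Acyclic case: freed as soon as the last reference goes away.
a = Obj()
a = None
assert Obj.freed == 1

# Cyclic case: the two objects keep each other alive, so pure
# reference counting never reclaims them.
b, c = Obj(), Obj()
b.partner, c.partner = c, b
b = c = None
assert Obj.freed == 1  # still 1 -- the cycle leaked

gc.enable()
gc.collect()  # the tracing cycle collector reclaims the pair
assert Obj.freed == 3
```

(Collecting cycles whose members define `__del__` requires Python 3.4+, per PEP 442.)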
I looked at it before, and it seemed nice, but... in the 13 years since its release, Lua has amassed the critical support to take over the world (LuaJIT2, FFI, stuff like Terra, MoonScript, Lua-in-JS, ...).
Lua and Squirrel being so close, it makes little practical sense to use Squirrel in a new project. Maybe for fun projects.
(And for something that's a little different, I recommend a look at https://github.com/aardappel/lobster - it's a breath of fresh air among scripting languages; practically, I still recommend Lua over anything else)
Squirrel is roughly similar (in the grand scheme of things) to Lua and JavaScript, with more of a (C++-style) OO flavor. Its main selling point is that it uses reference counting instead of a tracing garbage collector for automatic memory management. With a reference-counted language, you pay a small but predictable cost every time something goes out of scope, whereas tracing garbage collectors coalesce all of these small costs into one big collection cycle run every few milliseconds. The tradeoff is that while garbage collected languages provide better throughput (due to cache effects, heap fragmentation, etc.) for batch processing jobs, reference-counted ones are much more suitable for realtime and soft realtime applications that would much rather take a constant factor hit to throughput if it eliminates the danger of a multi-millisecond GC scan causing the application to miss a deadline.
As such, Squirrel enjoys some popularity as (and was indeed created to serve as, by an engineer who had grown frustrated with the experience of embedding Lua in FarCry) a scripting language for video games, being most notably used in some of Valve's newer games, like Left 4 Dead 2, Portal 2, and CS:GO. The rest of the engine code usually dwarfs the performance impact of "scripts" in most games, but GC scans can still cause games embedding other scripting languages to miss a frame presentation deadline, resulting in either a nasty tear or a completely missed frame. There are ways that tracing garbage collected languages can attempt to combat this (like incremental GCs), but some engine programmers have decided to accept the throughput cost for the peace of mind that reference counting provides.
EDIT: I had more but HN is silently ignoring some of my edits for some reason. Is there a post length restriction for new users, or have I tripped some kind of bizarre content filter?
Regarding 'small but predictable cost'... that's not entirely true.
In an earlier life I worked on an object-oriented operating system based around reference counting, called intent (you won't have heard of it), and it was horrible. One of the major issues with refcounting is that when you dereference an object, it may be freed then and there... which dereferences any dependent objects... which may be freed... which dereference more objects, etc. So any time you dereference your object, you may end up doing an arbitrary amount of work, which isn't good for real-time performance.
You can usually avoid this by being sensible about when you reference and dereference objects, but now you're effectively doing manual memory management, which slightly defeats the whole point of the exercise.
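The cascading-free behaviour described above is reproducible in any refcounted runtime; here's a sketch in CPython (the `Node` class and its counter are hypothetical), where dropping the single reference to the head of a chain ends up freeing every node in it:

```python
class Node:
    freed = 0  # class-level counter of destroyed nodes
    def __init__(self, nxt=None):
        self.next = nxt
    def __del__(self):
        Node.freed += 1

# Build a 1000-node singly linked chain.
head = None
for _ in range(1000):
    head = Node(head)

# Dropping one reference frees the head, which releases its 'next'
# reference, which frees the next node, and so on down the chain --
# an arbitrary amount of work triggered by a single decref.
head = None
assert Node.freed == 1000
```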
In our case, we also cocked things up with poor design --- we ended up reffing and dereffing objects a lot, which meant we constantly ran into issues where we needed to deref an object with a lock held, which isn't a good idea. A better designed system wouldn't have done this. Objective-C's autorelease pools work very well: whenever you create an object it gets added to a list, and all objects on the list are dereffed when your app returns to the event loop; so you only need to manually ref objects you want to keep until the next event, which is much easier to code for, and the object-free overhead all happens at a well-defined point, which is usually app idle time.
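A toy sketch of that deferred-release pattern (this is an illustration in Python, not Objective-C's actual autorelease machinery; all names here are made up):

```python
class ReleasePool:
    """Collects objects created during one event; drains them all at
    a single well-defined point when control returns to the loop."""
    def __init__(self):
        self.pending = []
    def register(self, obj):
        self.pending.append(obj)
        return obj
    def drain(self):
        for obj in self.pending:
            obj.release()
        self.pending.clear()

class Resource:
    live = 0  # how many resources are currently alive
    def __init__(self):
        Resource.live += 1
    def release(self):
        Resource.live -= 1

pool = ReleasePool()
for _ in range(3):
    pool.register(Resource())   # created while handling an event
assert Resource.live == 3       # nothing freed mid-event
pool.drain()                    # one cleanup point, e.g. at idle time
assert Resource.live == 0
```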
So while reference counting can help, it still doesn't make the problem go away, and when you're using it you have to remember that you are using it and ensure that your app is designed to cope.
(It's also worth remembering that modern incremental garbage collectors solve the same problem in a different way, and without some of refcounting's problems.)
I agree. With embedded "scripting" languages, you have to differentiate between two kinds of users that don't always (or even usually) overlap: the people writing the scripts (who may or may not even consider themselves to be programmers), and the engine programmers embedding the language in their game engine or other application. IMHO, the current homepage isn't really doing a great job of appealing to either group, and it'd be much more effective if it directly explained the advantages of its implementation to the engine dev crowd; that might alienate the script programmers a bit, but thinking about AAA game development, they're almost never the ones that get to pick the language anyway.
It's good enough by itself if you tell programmers not to create circular references and explicitly use weak references if they really need to. That opens you up to memory leaks from accidental circular references, of course, but again, it's a worthwhile tradeoff for many projects.
When you have a scripting language with a GC, you are targeting a class of users who can't, or don't want to, be careful enough to prevent circular references.
Circular references aren't easy to avoid; sometimes there are more than three levels of indirection involved. The mental overhead of preventing circular references is often as high as that of managing memory manually yourself.
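Weak references are the usual escape hatch in the parent/child case: the back-pointer doesn't contribute to the refcount, so no cycle forms. A Python sketch (the `Parent`/`Child` classes are hypothetical):

```python
import weakref

class Parent:
    def __init__(self):
        self.children = []       # strong references downward
    def add(self, child):
        child.parent = weakref.ref(self)  # weak back-reference: no cycle
        self.children.append(child)

class Child:
    pass

p = Parent()
c = Child()
p.add(c)
assert c.parent() is p       # call the weakref to get the parent
del p                        # refcount hits zero; parent is reclaimed
assert c.parent() is None    # the weak reference has gone dead
```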
If it were as good as you suggest, every new language would use reference counting rather than occasionally running mark-and-sweep.
The language has been around for quite a few years, but this line:
"Squirrel is a high level imperative, object-oriented programming language, designed to be a light-weight scripting language that fits in the size, memory bandwidth, and real-time requirements of applications like video games."
could tell you it's smaller than Ruby/Python/PHP, and designed to be OO compared to Lua and others. There is also a list of features; I don't think the page's authors should have a "we are better than X" list for every value of X.
No, but my point was more that the very bare samples on the page don't tell me much about what's interesting about Squirrel or why I would use it. Just how to declare a class.
Normally I'm agnostic on the issue of type safety, but the best game scripting language I used was a proprietary one that was strongly typed with no implicit casting (it also had no GC).
In my experience, a game script most often uses handles to certain types of game engine objects (cameras, vehicles, characters, the player...) and although it's often necessary to define your own types/classes/objects they're rarely complex. Usually you're just iterating over collections of stuff, exploring trees, querying the current state of the game engine or dealing with some kind of internal state.
Because of this, explicit type safety isn't really the burden it can be in other use cases, even for prototyping. Since you generally want hot reloads, and want a game script to fail gracefully without stopping the engine, you probably want as much sanity checking as possible before it gets to that point.
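A minimal sketch of the "fail gracefully without stopping the engine" idea, in Python for illustration (the guard function and script names are made up, not from any real engine):

```python
def run_script(fn, *args):
    """Call a script hook; on failure, log the error and keep the
    engine loop alive instead of letting the exception propagate."""
    try:
        return fn(*args)
    except Exception as exc:
        print(f"script error in {fn.__name__}: {exc}")
        return None

def buggy_update(state):
    # A typical script mistake: touching state that isn't there.
    return state["missing_key"]

result = run_script(buggy_update, {})
assert result is None  # the engine would simply carry on to the next frame
```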
To keep it short: with C/Java-like syntax and scripting features, what differentiates this language from JavaScript?
There are some nice features added, but I think some of them are also covered by newer JS versions. I really don't want to argue too much about JS, but I guess other people will ask the same question.
The line "optional 16bits characters strings" also makes me worry a little, since I know how much trouble the dual approach (8-bit strings vs. Unicode) caused in Python (and led to major changes in Python 3). It just causes too many headaches.
If the author is reading here: you might also consider linking to a page on your personal website that is accessible (I get "directory listing denied").
> The line "optional 16bits characters strings" also makes me worry a little
Especially since the other option isn't 32 bits, as it was in the wide builds of Python before 3.3 (which implemented PEP 393, "Flexible String Representation"). So you can only choose between broken Unicode (no characters outside the Basic Multilingual Plane) or no Unicode at all.
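The problem with 16-bit code units is easy to see by encoding an astral-plane character to UTF-16 (Python shown for illustration):

```python
# U+1F3AE (a video-game emoji) lies outside the Basic Multilingual
# Plane, so a 16-bit-code-unit string type must either split it into
# a surrogate pair or fail to represent it at all.
ch = "\U0001F3AE"
utf16 = ch.encode("utf-16-le")

assert len(utf16) // 2 == 2  # two 16-bit code units for one character
assert len(ch) == 1          # Python 3.3+ treats it as one code point
```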
https://computerscomputing.wordpress.com/2013/02/18/lua-and-...
My first thought is "Squirrel doesn't have Mike Pall" but that isn't quite fair to the author or the intentions of the language.