Apr 21 14

Core Society

by pixelpracht

There are striking similarities between my current project Core Society and Grove Script, the first domain-specific programming language I ever wrote. That was a really fun project on two levels: it's fun to come up with a domain-specific language of your own and then write a parser and an interpreter for it, where, for a change, optimizing for speed is actually justified. But once you have established that new interface to talk to your computer, the *real* fun can begin: figuring out how to use that new language and seeing what you can do with it. When you already know a system's inner workings and can change it on a whim if something doesn't please you, the learning experience isn't even painful. Core Society is like this. It's about programming cellular automata.

Cellular Automata

Cellular Automata are a well known concept in computer science. Typically you have a regular grid of cells, each in one of a finite number of states. Based on a set of rules you can look at a cell and its neighbours and decide on the cell's next state. Now you just apply these rules repeatedly to all cells in the grid. It's amazing to think about the complexity that can emerge from even simple setups. Take Conway's Game of Life for example, where each cell has only two possible states (on/off) and where only four simple rules define a cell's next state based on itself and its local neighbours. And despite that it's not only capable of generating complex evolving patterns – you can even build a Universal Turing Machine with it!


Conway’s Game of Life is a simple rule based system, quite in contrast to God’s Game of Life

But why simple rules and simple cell states? When I hear the word ‘Cell’ I think of the building blocks of all known living organisms, capable of self-replication and functionally quite complex. I wanted to step away from simple, rule based systems and towards the complexity of living cells. Why not build a cellular automaton where each cell is a fully programmable little computer? No external rules would decide how a cell’s state would change – only the initial programming of the cores would determine the course of the simulation.

The Cores

I've never written an emulator before, but that was exactly what I needed to do next – except that I wasn't emulating an existing computer system but one I had yet to define: a system specifically designed to serve as a programmable cell in a CA.

The architecture I've settled on acts like a von Neumann machine where instructions and data share the 256 16-bit words of memory. That means the whole memory is mapped to an 8-bit address space. Because I can't use less than a word (16 bit) to encode an instruction, I figured there are enough bits left to encode additional information along with the instruction – including the target address the operation is meant to operate on. All other computer systems I know don't have instructions that operate directly on memory. Not only because they have a far larger amount of addressable memory, but also because they are designed to run efficiently on physical hardware, where memory access is a bottleneck. Thus most CPUs have a couple of registers to store binary data in. When you program those CPUs in assembler you typically spend a lot of instructions copying data between registers and memory, and I was pretty excited about a chance to get away without that. Apart from that, a lot of the basic instructions in CoreSociety's assembly language will look familiar if you know other assembly languages. Here's the full list.
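To make the layout concrete, here's a minimal C# sketch of what such a core's state boils down to. The names (Core, Memory, ProgramCounter) are my own for illustration, not taken from the actual CoreSociety source:

// Sketch only: 256 words of 16-bit memory, an 8-bit program counter, no registers.
public class Core
{
    public const int MemorySize = 256;             // 8-bit address space
    public ushort[] Memory = new ushort[MemorySize];
    public byte ProgramCounter;                    // wraps around at 256 by itself

    // Fetch the word the program counter points at and advance.
    public ushort FetchWord()
    {
        return Memory[ProgramCounter++];           // byte overflow gives the wrap-around
    }
}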

After planning the instruction set from a semantic point of view I needed to figure out how to map the instruction and all relevant parameters to one or, if the instruction is more complex, two words of memory.

InstructionEncoding

To be memory-efficient not all bits encode the same kind of information in every instruction. For example not all instructions have a second parameter. The INC instruction just increases the value at the given address, while the SET instruction replaces the value at a given address with a new value that also needs to be supplied. If an instruction expects a second parameter the next word following the instruction is assumed to encode this parameter.

Here's how it works in detail: The highest four bits of an instruction word always encode the instruction group. If a group consists of complex operations that require two operands, only 2 bits are available to identify the group member. The 3rd bit (Param Flag) then encodes whether the following parameter word, which holds the second operand, is interpreted as a numeric value or as a pointer to an address. So there are 16 distinct groups possible that can consist of either 4 complex or 8 simple instructions. More than enough for our needs.

The next bit is the Target Flag and it tells the processor how the lower 8 bits of the instruction word, which always encode the first operand of the operation, are to be interpreted. When it's set, the operation's target is not an address but a numeric value or an address reached by indirection. Indirection means the passed address is not used directly as the target but read to get another address, which is then used as the target.

When a second parameter is required, the word after the instruction can be interpreted either as a 16-bit numeric value or as an address, supplied directly or by (recursive) indirection.
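Here's a small C# sketch of how such an instruction word could be decoded. The exact bit positions are my reading of the description above, not taken from the CoreSociety source, so treat them as an assumption:

public struct DecodedInstruction
{
    public int Group;           // bits 15..12: one of 16 instruction groups
    public int Member;          // bits 11..9 (simple group) or 11..10 (complex group)
    public bool ParamIsPointer; // bit 9: only meaningful for complex instructions
    public bool TargetFlag;     // bit 8: first operand is a value / indirect address
    public byte Operand;        // bits 7..0: first operand, usually a memory address
}

public static class InstructionDecoder
{
    public static DecodedInstruction Decode(ushort word, bool isComplexGroup)
    {
        return new DecodedInstruction
        {
            Group          = (word >> 12) & 0xF,
            Member         = isComplexGroup ? (word >> 10) & 0x3 : (word >> 9) & 0x7,
            ParamIsPointer = isComplexGroup && ((word >> 9) & 0x1) != 0,
            TargetFlag     = ((word >> 8) & 0x1) != 0,
            Operand        = (byte)(word & 0xFF),
        };
    }
}

For a complex instruction, the word that follows would then be fetched as the second operand and interpreted according to ParamIsPointer.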

The Assembler

To verify the architecture I've built a prototype IDE to write and run core programs in. Basically you write a listing in a custom assembly language which is immediately compiled into a core's memory. The core's memory is visualized as colored blocks and also written out as hex numbers. You can step through the code and watch the operations change the memory. The assembly language is pretty simple. There's a 3-letter mnemonic assigned to each instruction, followed by one or two operands. You can use labels to assign names to specific memory addresses and use those names instead of the actual address. Comments are also supported.

primegen IDE

This is the IDE where the cell programs are written and tested. Click for an unscaled version!

It's surprising what you can do with just 512 bytes. A program to calculate prime numbers that I wrote as an early test case was only 17 words long, leaving more than enough room to store the primes. Another positive side effect of such tiny memory is that you can visualize all the data with a couple hundred pixels. I found it pretty fascinating to watch the little cores work!

 


A core viewed in the IDE spending 255 energy on calculating primes. The last prime it finds is 19, then it runs out of energy.

The Grid

The next step after implementing the virtual system that would represent a cell in the cellular automaton was to put a bunch of them together in a grid so that they could start interacting.

 


This is the grid where the simulation takes place. Each tick the core with the highest unbound energy gets to execute an instruction. Energy is distributed cyclically until the energy budget is spent!

With the basic set of instructions in place I needed to figure out the rules by which cores (i.e. the cells) would be able to interact with their neighbours and how the system would decide which core would get to execute an instruction next. The straightforward solution would be to just let them execute one instruction after another and to add some instructions to read and write the memory of neighbouring cores.

But I've settled on something more complex. I had this grand vision that Core Society might evolve into a platform for competitive programming games. Like an arena where you'd try to write the perfect program to beat a challenge or compete with other programs for grid space. So I added an energy mechanic that governs the execution and interaction of programs. Energy is a precious resource that each core accumulates over time. It is spent when a core executes instructions. A core can spend energy to shield itself, preventing adjacent cores from writing into its memory. It can also reserve an amount of energy that, when released, allows the core to execute a number of instructions uninterrupted. There is an instruction that allows you to transfer energy into a neighbouring core, and one to drain energy from it.
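As a toy illustration of the scheduling idea – this is my own C# sketch, not the actual CoreSociety code, and the names are made up – a tick could look roughly like this:

using System;
using System.Collections.Generic;
using System.Linq;

public class GridScheduler
{
    public class CoreState
    {
        public int Energy;                      // accumulated, spendable energy
        public int ReservedEnergy;              // bound energy (e.g. shield or reserved burst)
        public int UnboundEnergy => Energy - ReservedEnergy;
        public Action ExecuteOneInstruction = () => { };
    }

    public List<CoreState> Cores = new List<CoreState>();

    public void Tick(int energyBudget)
    {
        // distribute the tick's energy budget cyclically over all cores
        for (int i = 0; energyBudget > 0 && Cores.Count > 0; i = (i + 1) % Cores.Count)
        {
            Cores[i].Energy++;
            energyBudget--;
        }

        // the core with the most unbound energy gets to execute one instruction
        var next = Cores.OrderByDescending(c => c.UnboundEnergy).FirstOrDefault();
        if (next != null && next.UnboundEnergy > 0)
        {
            next.ExecuteOneInstruction();
            next.Energy--;                      // executing costs energy
        }
    }
}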

So with a few bytes a core can be programmed to copy its program to neighbouring cores. Once the program has spread over multiple cores, those can work together to establish dominance over the grid against competing programs, or fulfil whatever goals they were programmed for.

Last but not least – to facilitate the creation of scenarios – I added an instruction to raise and one to lower a global score. It serves as a metric to gauge the efficiency of a solution to a scenario. Obviously, the more excess energy you can spend on raising the score, the better the final score will be. The initial board state could involve some cores that constantly lower the score, so a high-scoring solution would need to be very efficient at gaining control of these cores as fast as possible.


The green core is calculating primes, supported by the grey cores that provide it with energy! The blue cores try to prevent the red core from taking over the board by spending their energy to keep a shield up and support their neighbours.

Give it a try!

If you'd like to see more than animated gifs, I'd suggest you check out Core Society's Github repository or download its contents as a ZIP. It includes a precompiled executable, full source code, documentation and a couple of scenarios including reference solutions.

Getting Started

  • You might need to install the .NET Framework 4.5 redistributable if you don't already have it.
  • Then just start the prebuilt executable (CoreSociety.exe) or build it yourself with Visual Studio 2013. (Should run on Mono, too.)
  • Click on the top left icon to open a scenario. A click on the ‘Play’ Icon starts the simulation.
  • You can click on a listing in the ‘Deck’ to open the IDE window that allows you to modify the listing. The changes will automatically apply to the initial state of all cores that show the listing’s color.
  • To learn more about the instruction set and how to program a core, have a look at the Documentation.
  • If you’ve got questions and can’t find answers in the docs just comment below!
May 12 13

Minimal Bitcoin Miner in C#

by pixelpracht


Bitcoin was introduced in 2009 as a digital, decentralized currency. In the Bitcoin network there's no central authority. Instead, a public ledger of every transaction is validated and shared using peer-to-peer technology. Recently there was a surprising spike in the value of Bitcoins, making me curious to find out more about the system behind it.

Bitcoin transactions are grouped and confirmed by encoding them in a block and adding it to a blockchain that is protected by strong cryptography. The cryptographic calculations aren’t provided by a central authority – instead anyone with capable hardware and mining software installed can provide computational labor to help solve the next block.

When a block is solved the successful miner (or mining pool) earns a reward in Bitcoins. (Currently 25 BTC – worth about 2500€.) This is quite an incentive, and so Bitcoin mining evolved into a very competitive market. Advanced mining software uses GPUs to compute hashes a hundred times faster than possible on CPUs, and there is even dedicated hardware with hash rates in the giga-range and a better hashrate/power-consumption ratio. Nonetheless, incredible amounts of electricity (~1 GW) are wasted on mining Bitcoins, which basically means calculating zillions of SHA-256 hashes. Currently the combined network of miners calculates roughly 80,000,000,000,000 hashes per second (80 TH/s). Crazy stuff.

Miniminer

If you are trying to understand how Bitcoin mining software works but can’t find a reference implementation that is minimal and easy to understand (like me, 2 days ago) here’s my contribution:

btcminer

Miniminer is a simple CPU based Bitcoin Miner in C#. Only about 300 lines of code but fully functional, open source and uploaded on Github.

It uses the basic GETWORK protocol to connect to pools and mines at ~400 kH/s. That's not fast enough for serious mining, but enough to find a valid share eventually, submit it to the pool server and be happy about the fact that you understand every single line of code that made it happen!
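For the curious, here's a sketch of the hot loop every miner runs in some form – a simplified C# illustration, not Miniminer's actual code, and it glosses over the byte-order details of real block headers:

using System;
using System.Security.Cryptography;

public static class MiningSketch
{
    // Try a range of nonces against an 80-byte block header.
    public static uint? ScanNonces(byte[] header80, byte[] target, uint start, uint count)
    {
        using (var sha = SHA256.Create())
        {
            for (uint nonce = start; nonce < start + count; nonce++)
            {
                // the nonce lives in the last 4 bytes of the header
                BitConverter.GetBytes(nonce).CopyTo(header80, 76);

                // Bitcoin uses a double hash: SHA256(SHA256(header))
                byte[] hash = sha.ComputeHash(sha.ComputeHash(header80));

                // the hash is little-endian; reverse it to compare against the target
                Array.Reverse(hash);
                if (CompareBigEndian(hash, target) <= 0)
                    return nonce;               // found a share / block candidate
            }
        }
        return null;                            // nothing found in this range
    }

    static int CompareBigEndian(byte[] a, byte[] b)
    {
        for (int i = 0; i < a.Length; i++)
            if (a[i] != b[i]) return a[i] < b[i] ? -1 : 1;
        return 0;
    }
}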

Apr 28 13

Grove Script

by pixelpracht

gs_header

In the past months I've been reading up on procedural generation and emergence. My research led me to develop a scripting language to be able to interactively 'program' L-system based fractal animations. It's meant to be easy to pick up and modify, so it might be a nice sandbox for people who would like to learn to program or just want to engage in a session of creative coding. It's quite fun once you've gotten the hang of it!

L-Systems

I think every programmer has dabbled with Fractals at one point. I remember generating Mandelbrot images on the very first computer my family owned.

Last fall Josh Parnell pitched his project Limit Theory on Kickstarter. His claims sounded way too ambitious for a one-man project, yet he raised more than 350% of the funding goal. So why did people trust him to pull it off? His pitch video featured some quite cutting-edge procedural generation technology.

This project rekindled my interest in the topic. I found a PDF copy of the book "The Algorithmic Beauty of Plants" and was intrigued. The basic concept of L-Systems is simple enough: it's a grammar-based rewriting system, a theoretical framework for modelling the development of simple multicellular organisms, introduced in 1968 by the Hungarian theoretical biologist and botanist Aristid Lindenmayer.

But the above book introduced me to more advanced variants of L-Systems, adding context sensitivity, randomness and parameters. The examples were quite impressive and my urge to experiment with it irresistible.

Usually L-Systems define a bunch of production rules that operate on strings of symbols. When a rule is applied it replaces a sequence of symbols in the string (the predecessor) by a different, typically more complex sequence (the successor). The more iterations you perform the longer and more complex the string of symbols becomes. Symbols are statically defined to do a very specific thing like rotating or moving a cursor to generate a rendering.
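As a minimal illustration of that rewriting idea (my own C# sketch, not Grove Script code), here's the classic character-based version, using a Koch-style curve as the example:

using System.Collections.Generic;
using System.Text;

public static class LSystem
{
    // Every iteration, each symbol that has a production rule is replaced by its
    // successor; symbols without a rule (like '+' and '-') are copied unchanged.
    public static string Rewrite(string axiom, Dictionary<char, string> rules, int iterations)
    {
        string current = axiom;
        for (int i = 0; i < iterations; i++)
        {
            var next = new StringBuilder();
            foreach (char symbol in current)
                next.Append(rules.TryGetValue(symbol, out var successor) ? successor : symbol.ToString());
            current = next.ToString();
        }
        return current;
    }
}

// Example: Rewrite("F", new Dictionary<char, string> { ['F'] = "F+F-F-F+F" }, 3)
// grows the single-symbol axiom into a long drawing program for a Koch-like curve.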

KochTurtleAnim

 

Imagine the above string of symbols was generated using L-Systems. F is defined to move forward, + to rotate left and – to rotate right.

This method of programming line-based drawings by controlling a cursor through commands that move and rotate it relative to its current state is called Turtle Graphics. The programming language Logo, designed for educational use, first introduced it, and it's quite intuitive to grasp.
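A bare-bones turtle that interprets exactly the three symbols from the figure could look like this in C# – again just my own illustration; Grove Script's real turtle also handles line size, transparency, color and a state stack:

using System;
using System.Collections.Generic;

public class Turtle
{
    double x, y, heading;                        // heading in degrees

    public List<(double x1, double y1, double x2, double y2)> Lines =
        new List<(double x1, double y1, double x2, double y2)>();

    public void Run(string symbols, double step, double angle)
    {
        foreach (char c in symbols)
        {
            switch (c)
            {
                case 'F':                        // move forward and remember the segment
                    double nx = x + step * Math.Cos(heading * Math.PI / 180);
                    double ny = y + step * Math.Sin(heading * Math.PI / 180);
                    Lines.Add((x, y, nx, ny));
                    x = nx; y = ny;
                    break;
                case '+': heading += angle; break;   // rotate left
                case '-': heading -= angle; break;   // rotate right
            }
        }
    }
}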

Grove Script

My experiments turned into a little scripting language to interactively 'program' L-system based fractals, called Grove Script. At first glance it seems to do nothing special, as it, too, is based on the concept of using turtle graphics in conjunction with L-systems.

But instead of statically defining an alphabet of symbols that the productions operate upon and that have a fixed meaning when executed, in Grove Script the symbols refer to executable code. The key idea is that instead of programming everything by hand like in your typical programming language, or forfeiting the flexibility of procedural languages completely like in your typical rewriting system, Grove Script brings both worlds together. It allows you to use rewriting rules to create complex sequences of procedure calls by applying simple rules to equally simple axioms.

And unlike most other scripting or programming languages, where writing and running the program are separate steps, Grove Script aims to let users develop fractal renderings and animations in a fun, intuitive process: there's no compiling or restarting required when you change the code. You get instant feedback on your changes. I'll let Bret Victor explain why that's a good thing!


 

You're free to use your favorite text editor to write Grove Script. But when the script you're editing is running in the interpreter, every change you make will be instantly incorporated.

Productions

The most basic productions just replace a token with some replacement whenever it is encountered. Rules need not be simple, though. Grove Script supports:

  • Context Sensitivity – Rules can require multiple tokens to be encountered in order.
  • Parametric Tokens – Tokens can have parameters attached.
  • Conditions – Productions trigger only if the supplied boolean expression evaluates to 'true'.
  • Consequences – Rule-specific code can be executed whenever a production triggers. Great for raising counter variables.

Instruction Set

As mentioned before, I use the concept of Turtle Graphics in Grove Script. All pixels are drawn by the turtle, controlled by only a handful of straightforward commands. Besides commands to modify the turtle's spatial state, you can set the size, transparency and color of the line. That's it.

There are other commands to save and restore the interpreter state on a stack – very important for generating branching structures. There's an instruction to call subroutines or a list of subroutines. And of course those subroutine lists are ideal candidates to be generated with L-System-inspired productions. So you get a command to initialize a list with the axiom and one to apply a ruleset. Last but not least you can define variables through commands that can be used in algebraic and boolean expressions.

But all in all there are only a dozen commands you have to familiarize yourself with to unlock the full potential of Grove Script.

Macros

I found a way to handle all the usual control flow (conditional branching, loops) with 3 basic instructions that operate on code blocks marked using Tab indentation.

  • Repeat – Continue with the first of all previous instructions of the same block depth.
  • Break – Skip all following instructions of the same or greater block depth.
  • Gate – When the condition evaluates to false, skip all following commands of the same or greater depth, except other Gate instructions of the same depth.

Using these control-flow instructions to the desired effect is a little counter-intuitive though. So to improve the usability I added macros that will unfold to the required operations when the interpreter parses the script. Macros blend well with instructions and improve readability. Being able to write code that uses familiar words like “while, until, repeat, if, else” hopefully appeals to people with and without programming background alike.

Thanks to macros you can write this…

repeat 4
	out 'foo

… which unfolds to this…

set _c, 4
    gate _c > 0
    out 'foo
    set _c, _c-1
    repeat

Animations

The interpreter is written in C++ and quite fast. It renders even complex fractals that require thousands of instruction calls in real time. And if it's possible to render 60 frames per second, it would be a waste to never change the output. By using the time() function in expressions you can generate fractals that change over time and render animations. Another important function in that regard is random(), which provides you with pseudo-random numbers. You can use time() to calculate the random seed used by the PRNG, and suddenly the randomness is under your control. It took me a while to figure out how to use these elements in conjunction with productions, but once you've gotten the hang of it it's easy to script quite impressive looking vector graphics that let you forget that everything you see is rendered with the plain olde Turtle Graphics approach.

Demonstration

Excuse the poor quality. I just captured the interpreter with Hypercam instead of rendering to video directly.

To experience Grove Script the way it’s meant to be you should check out the Github repository. There’s a precompiled interpreter, full source-code, documentation, examples and a half finished tutorial! Feel encouraged to try it for yourself: Start Grove.exe. Drag&Drop some scripts from the example folder to the application. You can edit the script while Grove executes it and it will adapt at run time. If you’ve got questions and can’t find answers in the Docs or from the tutorial just comment below!

There’s no reason this shouldn’t compile on Mac or Linux. If you want to make your own build you need the Cinder Framework - the rest is just plain standard C++.

And if you just want to know what Grove Script looks like without having to download and run executables from the internet, here's the listing that generates the animation from the above video:

//set some basic constant parameters to be used in expressions

set speed, 15

set fadetime, 20
set budstep, 5
set maxage, 70
set leafAge, 5
//let the size of the structure adapt with window size
set step, 0.5*_height/480

//these variables change every frame because they all depend on the current
//value of the time() function.
//Thanks to the modulo operator, age wraps around at maxage.
set age, (speed*time()) % maxage
set fade, min(1, (maxage-age) / fadetime)
out fade
set age, 2.5*age^0.55
set frac, max(0.01, frac(age))

//each structure should look unique. 
//So we use a different random seed for every new structure
out 'rnd seed, floor(time()*speed/(maxage))
shuffle floor(time()*speed/(maxage))

//init the turtle
dir random(360)
pos 0, step*(5*age-100)

//set the axiom of the 'plant' structure
seed plant, A(0,0,1)
//apply the ruleset 'r1' as often as the plant is old.
repeat age
	grow plant, r1

//fun info: how many buds were generated?
out budCnt

//render 'plant' twice - mirrored.
push
rotate -45
run plant
pop
rotate 135
run plant

//Now the Tokens that the L-System operates upon are defined.

//light blue tips of the structure
#A(age, angle)
run shade(leafAge/frac)
size frac^0.5
move step*frac^0.5, angle

//what becomes of A in the next cycle - a simple curve
#B(t, angle)
run shade(leafAge/(t+frac))
size (t+frac)^0.5
move step*(t+frac)^0.5, angle

//B becomes C when it's old enough to spawn leaves - does the same thing, though
#C(t, angle)
run B(t, angle)

//a growing leaf
#L(t, angle)
run leaf(t+frac, angle)

//a finalized leaf
#xL(t, angle)
run leaf(t, angle)

//a growing bud
#Y(t, spread, angle)
run bud(0.3*(t+frac), (spread+frac)*budstep, 0.1*angle)

//a finalized bud
#xY(t, spread, angle)
run bud(0.3*(t+frac), (spread+frac)*budstep, 0.1*angle)

//render a bud
#bud(len, angle, curve)
run budshade(1/len)
//left side
push
size 2*len^0.3
rotate angle
move len^0.3, angle
size len/2
rotate -90
move len, -angle
pop
//right side
push
size 2*len^0.3
rotate -angle
move len^0.3, -angle
size len
rotate 90
move len, angle
pop
//stem
rotate curve
move 0.55*len^0.7

//render a leaf
#leaf(len, ang)
size 1
rotate 10*ang*len^0.6
move 0.5*len^0.3
move len^0.6, ang*(60+len*5)
rotate 180 - ang*0.4*(60+len*10)
move 0.5*len^0.5, -ang*(len*5)
move 0.7*len^0.5, ang*(len*10)

//color buds
#budshade(alpha)
visible alpha * fade
rgb 1.0-0.8*alpha*alpha, 0.6*alpha+0.2, 0.2*alpha*alpha

//color rest
#shade(alpha)
visible alpha * fade
rgb 1.0-0.5*alpha, 0.5*alpha+0.5, 0.3*alpha

#[
push

#]
pop

//The rules are evaluated in the order of appearance. 
//rare rules appear first, if all are skipped the general case is usually to just grow the existing tokens
#r1
//BRANCHING
//higher c makes a vine less likely to bend or split but increases chance to spawn buds
A(i, j, c) : rnd(0, c) > 2+budCnt -> B(1, j) Y(0, 1, j)
	raise budCnt, 1
//if old enough there's a chance to split into 3
A(i, j, c) : i > c and rnd() < 0.5/c -> B(1, j) [ A(0, rnd(-40,-60), c+2) ] [ A(0, rnd(40,60), c+2) ] A(0, 0, c+1)
//...or into 2
A(i, j, c) : i > c and rnd() < 0.5/c -> B(1, j) [ A(0, rnd(-40,-60), c+1) ] A(0, rnd(40,60), c+1)
//or just change the direction
A(i, j, c) -> B(1, j) A(i+1, j+rnd(-90, 90)/(c+1), c)
//LEAVES
//based on the curvature leaves spawn on different sides
B(i, j) : j < 0 and i = leafAge-1 -> C(i+1, j) [ L(0, 1) ]
B(i, j) : j > 0 and i = leafAge-1 -> C(i+1, j) [ L(0, -1) ]
//GROWTH
//and of course everything needs to grow
Y(i, j, k) -> Y(i+1, j+1, k) xY(i, j, k) 
B(i, j) -> B(i+1, j)
C(i, j) -> C(i+1, j)
L(i, j) : i > 1+rnd(7) -> xL(i+1, j)
L(i, j) -> L(i+1, j)
Feb 16 13

How Today’s Videogames Miss Their Potential

by pixelpracht

The following wall of text contains musings on how today's video games are missing their potential. I analyze the game industry's focus on surface values and why that's a bad thing. I argue that photo-realism isn't required for immersion, and point out what is instead. I draw parallels to the state of AI research. I criticise the lack of investment in R&D and a stagnation despite video games being far from realising their potential. Read it!

I wanted to create games for the better part of my life. I enjoyed playing video games but what fascinated me was their yet unfulfilled potential. Hardware was growing more powerful at a staggering pace and games were the only kind of consumer oriented software that could motivate people to upgrade their systems every two years. It was this synergy between hardware and video games that allowed both industries to prosper. We were witnessing the birth of a new medium and I wanted to have a part in shaping it.

Moore’s law, the prediction made in 1965 that the complexity of integrated circuits would double every two years, remained surprisingly accurate for over half a century. The improved capabilities of electronic devices changed our lives profoundly. When I’m surfing the net with my smartphone, reading a book on my kindle, depending on amazon, google, wikipedia or the navigation assistant to solve my day-to-day problems I realize that technology has advanced way beyond what I could have imagined 15 years ago.
Video games, on the other hand, turned into a disappointment. Not because my career plans failed – I studied multimedia engineering and make my living in the games industry. But the product is not what 15-year-old me would have expected videogames to be like in 2013. Not remotely close. That's a thought-provoking revelation for someone who devotes most of his waking hours to games.

read more…

Aug 5 12

Partial Occlusion Field-of-View

by pixelpracht

It seems like there are problems that I have to solve over and over again. Pathfinding is one of them. Another is visibility determination.

When you need to know if there's an unobstructed line of sight between point A and point B, you typically just cast a ray from A to B and see if it hits something. The set of all points that can be reached with an unobstructed line from A is referred to as A's Field of View. Calculating it accurately would require an unlimited number of rays, so you typically sacrifice accuracy by partitioning space into some discrete data structure. Grids, trees, pixels, voxels… that kind of thing.

In this post I’ll focus on FoV calculations on square grids. Using a low resolution square grid is a pretty huge abstraction, and the whole point of this article is to show how to get some accuracy back without losing too much performance! :) But let’s start simple…

Calculate Field of View with Raycasting

As long as you only care for a Boolean result (is square B visible from square A in the center?), it's straightforward to calculate using raycasting.
First of all, consider how many rays you need: just enough so that every field that could potentially be part of the Field of View is passed by at least one ray. So better define a max range, or you would still have to cast an unlimited number of rays despite working on a low-resolution square grid. Now select the fields at maximum range so that they completely enclose your center. Draw lines to them, and all closer fields are automatically tested too!

Calculate FoV by raycasting

Raycasting only towards fields at max range reduces the algorithm's complexity from O(n³) to O(n²). That's a huge difference! Imagine how the illustration would look if every cell were the target of a raycast…

For line-drawing you can just use Bresenham’s Line Algorithm. All clear fields covered by the line are part of the field of view. As soon as you hit an opaque field abort the raycast. It’s not part of the field of view and all other fields that follow along the direction of the ray are occluded by it.
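Here's a compact C# sketch of that boolean raycasting FoV – my own illustration, not the AS3 code from the demo linked below:

using System;

public static class RaycastFov
{
    // Cast a Bresenham line from the center to every cell on the border of the
    // max-range square; every clear cell a ray passes is visible, and each ray
    // stops at the first opaque cell it meets.
    public static bool[,] Compute(bool[,] opaque, int cx, int cy, int range)
    {
        var visible = new bool[opaque.GetLength(0), opaque.GetLength(1)];
        visible[cx, cy] = true;

        for (int i = -range; i <= range; i++)
        {
            CastRay(opaque, visible, cx, cy, cx + i, cy - range);
            CastRay(opaque, visible, cx, cy, cx + i, cy + range);
            CastRay(opaque, visible, cx, cy, cx - range, cy + i);
            CastRay(opaque, visible, cx, cy, cx + range, cy + i);
        }
        return visible;
    }

    static void CastRay(bool[,] opaque, bool[,] visible, int x0, int y0, int x1, int y1)
    {
        // standard Bresenham line walk from (x0,y0) towards (x1,y1)
        int dx = Math.Abs(x1 - x0), dy = -Math.Abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        int x = x0, y = y0;
        while (true)
        {
            if (x < 0 || y < 0 || x >= opaque.GetLength(0) || y >= opaque.GetLength(1))
                return;                          // left the grid
            if (opaque[x, y] && !(x == x0 && y == y0))
                return;                          // ray is blocked, opaque cell stays dark
            visible[x, y] = true;
            if (x == x1 && y == y1)
                return;                          // reached the border cell
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x += sx; }
            if (e2 <= dx) { err += dx; y += sy; }
        }
    }
}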

There’s room for improvement

When I tried to use the raycasting approach to implement dynamic lighting for an isometric 2D game engine, I realized I would need an improved algorithm to make it look great.

Consider an opaque field casting a shadow. A lot of fields will neither be fully lit nor fully in shadow. From the light source's point of view those fields are partially occluded! So I wanted my fields to specify their membership in the Field of View as a percentage (0 – 100%) and not just a Boolean (true or false).

The other limitation I hoped to overcome was integer coordinates. It didn't suffice to say a light source is on field (x,y) – I wanted to specify the exact position on the field the light would come from. So I needed to be able to precisely define the center of my Field of View using non-integer coordinates.

The Mission

I hope I managed to convey an idea of what problem the algorithm I’m about to present is trying to solve. To be sure let’s sum up the situation:

  • We operate on a square grid that partitions a 2D plane in equally sized fields.
  • Some squares are clear (will allow light to pass through) while others are opaque (will block light).
  • We define a point for which we want to calculate the Field of View.
  • We want to know the degree of occlusion for each square closer than the max range.

My research didn’t yield any established solution so I accepted the challenge to find a new one (or reinvent the wheel, who knows). Here’s a demonstration of my solution in action:

An interactive Flash Demonstration of Raycasting and Partial Occlusion based FoV

The full source of the demo is available under the MIT license. However, deducing an algorithm from poorly documented AS3 source code might be a little inconvenient if you want to understand the principles behind it. So I conclude the post with an explanation of the algorithm.

Partial Occlusion Field of View

It helps to think of the center of the Field of View as a light source from which light spreads equally in all directions. The spreading light will exit a square on the edges facing away from the light source and enter a neighbouring square. So for each square you can clearly identify one or two neighbouring cells that all the light is coming from.

Now all you have to do to know how much light is going through a particular square is to ask the relevant neighbours to describe their contribution. If a neighbouring square is opaque it will contribute no light. Otherwise it will contribute a portion of the light it itself received – which can be anything between zero and the width of the shared edge.

Circular sectors to represent beams of light

A simple way to express the “portion of light” is a circular sector. With the center known we just have to store two angles to define such a sector.

Two angles are enough to describe light contribution as a circular sector.

This illustration shows how the light passing between two occluders can be described by specifying two angles: Alpha and Theta.

Due to the nature of the grid, one sector (i.e. two angles) per neighbour is enough to describe its contribution! This is because occluders are always one square in size and thus the shadows they cast can't be smaller than one square. With this method of describing light beams it's easy to form unions and intersections of them, too: unions when we want to combine the light coming from two different neighbouring squares, and intersections when we constrain the result to the size of our current square.
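Here's a sketch of that sector bookkeeping in C# – illustration only, ignoring the wrap-around at ±180° that a real implementation has to handle:

using System;

public struct Sector
{
    public double From, To;                 // From <= To, both in degrees
    public bool IsEmpty => To <= From;
    public double Width => Math.Max(0, To - From);

    // union: combine the light coming from two neighbouring squares
    public static Sector Union(Sector a, Sector b)
    {
        if (a.IsEmpty) return b;
        if (b.IsEmpty) return a;
        return new Sector { From = Math.Min(a.From, b.From), To = Math.Max(a.To, b.To) };
    }

    // intersection: constrain the result to the sector of the current square
    public static Sector Intersect(Sector a, Sector b)
    {
        return new Sector { From = Math.Max(a.From, b.From), To = Math.Min(a.To, b.To) };
    }
}

The ratio of the resulting sector's width to the width of the square's own maximum sector then gives the fraction of light that actually gets through – exactly the quantity the algorithm below computes per square.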

The Algorithm

The algorithm can be summed up as follows:

FOR EACH non-blocking square:
	1: Identify all neighbours that are not opaque 
	   AND share an edge AND are closer to the center.
	2: Store the contribution from the first neighbour 
	   as circular sector A.
	3: IF a second neighbour exists: store the contribution 
	   from the second neighbour as circular sector B.
	4: Calculate a circular sector C that describes the maximum 
	   amount of light that could possibly pass the current square.
	5: Form the UNION of A and B and INTERSECT with C to receive D. 
	   D describes the amount of light actually passing the current square.
	   The smaller the ratio of D to C the bigger the occlusion.

The following illustration demonstrates the process on a square that happens to be half occluded. Only one neighbour contributes light, while the other is opaque.


This animated illustration shows how to calculate the occlusion of a square based on the occlusion of its two neighbours closer to the center.

Calculating a Square's Circular Sector

But how do you calculate a circular sector that describes the maximum-width light beam that could go through a particular square (labeled 'C' in the above illustration)?
To find it you simply calculate alpha for all four corners of the square and pick the largest and the smallest angle to form the circular sector. The resulting sector will enclose the full square.
Two of the corners of each square define a circular sector that describes the maximum amount of light this square can receive.
The illustration shows that there's a distinct pattern in which corners you have to pick. So you don't have to calculate four values and then discard two. Instead you can evaluate the signs of the x and y components of the vector pointing from the light source to the square to predict the correct ones. And if you're going through the source code of my reference implementation and notice these weird looking lines, now you know what they do: dx and dy describe the position of the square in relation to the center, and (x1, y1) and (x2, y2) are the corners that define the sector.

var sx:Number = (dx == 0) ? 0 : (dx < 0) ? -1 : 1;
var sy:Number = (dy == 0) ? 0 : (dy < 0) ? -1 : 1;
var ox:Number = (sy == 0) ? sx : 0;
var oy:Number = (sx == 0) ? sy : 0;
var x1:Number = x + 0.5 * (1 - sy + ox);
var y1:Number = y + 0.5 * (1 + sx + oy);
var x2:Number = x + 0.5 * (1 + sy + ox);
var y2:Number = y + 0.5 * (1 - sx + oy);

Recursion vs Iteration

Another difficulty worth mentioning is that to calculate a particular square's occlusion, the occlusion of its relevant neighbours must be known first. How can we make sure they are already calculated?

The obvious approach would be to use recursion, which means that whenever we query the sector of a cell that hasn't yet been evaluated, its sector is calculated before the query returns the correct value. To calculate the FoV (i.e. the degree of occlusion of all squares within max range of the center) using recursion, trigger the evaluation in the four corners of the bounding rectangle. All squares enclosed by the rectangle will be visited before the calls return. There's just no other way for the recursion to terminate than by reaching the center tile, for which the occlusion is always known. This is when the evaluation method ceases calling itself on neighbouring cells and begins to return values. It's a good idea to implement it so that each cell is only calculated the first time it is queried and the result is cached. That way subsequent queries don't incur an unnecessary re-evaluation.

There's nothing wrong with that approach as long as the stack space suffices. But if this could be a problem, it's also possible to devise an iterator pattern that visits squares in such an order that all relevant neighbours have already been calculated.

It's possible to iterate over the grid, starting from the center, so that squares are calculated in just the right order.

The demonstration I’ve linked implements both the recursive and the iterator based version of the algorithm as well as some raycasting based FoV as a reference. Here’s the source code of the demo! It’s released under the MIT License and includes all relevant dependencies. I hope you enjoyed the lengthy read, feel free to post questions or comments! :)

Jun 25 11

Effortless Animations in Flash

by pixelpracht

You want to display an Animation in Flash. The animation consists of a number of frames, each is saved as a small image somewhere on your HD. You wonder: What is the best way to do it?

If this scenario applies to you, here is one possible answer that does not require the Flash IDE or any full blown framework. It involves a free tool and a couple of small classes (also free) that should be easy to integrate with your project.

Step 1: Create an Image Atlas out of the individual images.

What is an Image Atlas? A big image containing many small ones side by side. Also known as Texture Atlas or Sprite Sheet.

Why do we want one? Because it's easier to embed one image file than many. :) It also reduces the size of the SWF, because one big image compresses better than many individual images. We'll use a tool called Texture Atlas Generator to create our image atlas.

Create an Image Atlas

  • Check “PNG” so an image atlas is generated.
  • Check “Pack Transparent”. This will pack the images as small as possible by removing empty pixels. The 17 original images have a resolution of 128 x 64 each. The image atlas has a resolution of 400 x 182. So the same animation takes up roughly half the amount of pixels.
  • Check “TXA” to generate another file that stores meta information e.g. what frame went where in the image atlas.
  • All other options can be un-checked.

If everything works you end up with 2 new files: A TXA file and a PNG image atlas that should look roughly like this:

Step 2: Embed the image atlas and the meta data in your Flash.

As described in my other blog post, you can embed all kinds of data in a SWF.

[Embed(source = "../Resources/MoikSlashAni.txa", mimeType="application/octet-stream")]
private static var animationTxaResource:Class;
[Embed(source="../Resources/MoikSlashAni.png")]
public static var animationPngResource:Class;

Step 3: Use the ImageAtlas and AnimatedBitmap class to display the embedded animation.

In the Code Sample you’ll find a couple of classes from my library of custom code. Feel free to use them for your own projects. In this tutorial we’ll use the ImageAtlas class to unpack the individual frames from the image atlas and the AnimatedBitmap class (that behaves just like a normal Bitmap class with some extra functionality) that we will add to the stage to display the animation.

//unpack the embedded resources
var imageData:BitmapData = (new animationPngResource() as Bitmap).bitmapData;
var metaData:ByteArray = new animationTxaResource();
//create an image atlas storing our animation
var imageAtlas:ImageAtlas = new ImageAtlas(imageData, metaData);
//create an animated bitmap...
var animation:AnimatedBitmap = new AnimatedBitmap(imageAtlas);
//...make it play a sequence of frames...
animation.assignFramesByName("MoikSlashAni_0000", 16);
animation.play();
//and add it to the stage
stage.addChild(animation);

It’s as easy as that! :)


Download the sample project here: ImageAtlasTutorial

Step 4: Make it your own.

You probably want to have a look in the source code (to learn what AnimatedBitmap can do) and wrap the functionality with your own code. The AnimatedBitmap has a couple of functions that make it flexible to use.

  • You can toggle whether you want the animation to loop.
  • You can set the current frame yourself or let it update automatically (by calling the play method).
  • You can reassign new frames so you could store a lot of different animations in the same image atlas. Running, Slash, Jump and so on. So one image atlas and one animated bitmap could be enough to display a full character. You just have to change the sequence of frames you want currently played e.g. assignFramesByName("jump001", 14) to play 14 frames of jumping.
  • You can pass a callback function as a parameter to play() that is called when the last frame is reached (and the animation isn't looping), e.g. to play a follow-up animation.

I hope this tutorial has given you something to get started with! Feel free to post further questions in the comments. :)

Copyright Notice: The sprite I used in the demonstration is public domain, and consider the code used in the sample released under the MIT license.

Jun 18 11

Finally blogging on my own webspace!

by pixelpracht

I have a domain, webspace and unlimited traffic… there is no good reason beyond laziness not to host my own blog.


So, no more forwarding to wordpress.com for I found the time and bravery to make the move! :) I hope everything still works. In the days to come I’ll explore the new customization options that I now have.

Thanks to Barry from GamesChart.com for the nudge! :)

May 31 11

An asset pipeline in Flash. How we do it!

by pixelpracht
Here's a real-world example of how we handle the assets in our current flash game project at my company. It's a rather big browser game that uses several megabytes of resources (XML meta-data, PNG & JPEG images, sounds and movieclips) that we have to deploy to the servers and load into the client when needed. In such a scenario managing the dependencies between the code and the assets can be quite challenging and time-consuming.
This post is a little advanced and you should already be familiar with embedding and dynamically loading resources in Flash.

The Challenge

The content creators create assets file by file and save them to a folder structure. Thanks to the magic of version control systems (SVN in our case) this ever-growing collection of assets is synced between all collaborators. But someone has to make sure everything is available at runtime when the application needs it.
To embed all resources using the [Embed] tag in the game's source code would cause several problems: The size of the SWF would be huge, as all assets that are potentially needed would have to be included. Each embedded resource has to be added when you build your project, so compile times would explode. And as we're using Flash Builder we would have to write some lines of code for each embedded asset. But who's going to maintain that? Game designers and artists change, add and delete files in the repository, but they don't have the means (or inclination) to edit source code, so who's keeping it synced?
A contrary approach is to put all resource files on the server and load them at run time. But different types of resources require different code to load, if you need many resources at once you have to open one connection per resource (adding a lot of overhead), and you have to keep track of multiple loaders. Lastly, many resource files compress efficiently when embedded into a SWF, and it's sad to miss out on that kind of optimization.

Our Solution

So we've decided to set up an asset pipeline that converts our assets into a format that is better optimized for the needs of the runtime environment. Such a system is a common thing in "real" game development but unusual for flash game development – maybe because traditionally Flash applications are authored in an IDE that has built-in functionality for asset management.
Flash Builder doesn't have anything like that, but luckily it's based on Eclipse, a software development environment with an extensible plugin system. One plugin (part of the "Eclipse Java Development Tools") adds support for "Apache Ant" build scripts to Flash Builder. Ant is a tool for automating software build processes. What to build with it? SWFs that contain only assets as embedded resources and get uploaded to the server alongside our game SWF; when resources are needed, the appropriate resource packs are loaded at runtime to make the embedded assets available to Flash.
But our Ant script does a lot more than just invoke a compiler to build some AS3 files. It also generates these files on the fly. So the manual work required to embed all our assets into some dozens of resource packs reduces to clicking a button.

The Details

So, what happens when we click?
Mar 18 11

Matrix Magic

by pixelpracht
For a flash game project at work I needed to set up a flash.geom.Matrix instance to transform points from cartesian space into an isometric perspective.
To get it working I had to answer three questions:

1. How do the properties a, b, c, d, tx and ty map to the coefficients of the matrix?

Mapping of coefficients in the Flash Matrix class

2. How to use the matrix to transform a Point?

The Matrix class provides two methods to transform a point.
  • deltaTransformPoint transforms a point without applying the translation.
x' = a * x + c * y;
y' = b * x + d * y;
  • transformPoint transforms a point including the translation.
x' = a * x + c * y + tx;
y' = b * x + d * y + ty;
If you know the mapping of coefficients and a bit of computer graphics, that's the expected behavior. To be able to multiply a 2D vector with a 3×3 matrix you introduce an extra 1 as the third component. Then perform a matrix multiplication. That way the translation is added to x and y after the rotation/shearing is done.

3. How to setup the matrix to get the desired transformation?

If none of the provided methods fits your needs (like in my case) you can set the correct values for the coefficients directly. That's easier than it sounds. Look at the basis vectors of your source coordinate system and imagine (or calculate) what they look like after the transformation. Assign coefficient a the x value and coefficient b the y value of the transformed x-axis. Assign the x and y values of the transformed y-axis to c and d.
If you need it you can set tx and ty to move the transformed points relative to the new origin.
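To make that recipe concrete, here's a worked example as a small C# sketch – my own illustration; the AS3 version would just assign the same numbers to a flash.geom.Matrix. A typical 2:1 isometric view sends the cartesian x-axis right-and-down and the y-axis left-and-down:

public struct Affine2D
{
    public double A, B, C, D, Tx, Ty;       // same coefficient layout as flash.geom.Matrix

    public (double x, double y) TransformPoint(double x, double y)
        => (A * x + C * y + Tx, B * x + D * y + Ty);
}

// cartesian (1,0) -> screen ( 1, 0.5)  gives a =  1, b = 0.5
// cartesian (0,1) -> screen (-1, 0.5)  gives c = -1, d = 0.5
//
// var iso = new Affine2D { A = 1, B = 0.5, C = -1, D = 0.5 };
// iso.TransformPoint(2, 2)   // the cartesian point (2,2) lands at screen (0, 2)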

And why do I blog about math, now?

Adobe managed to have an incorrect illustration of the layout of the coefficients in the German docs (the English docs happen to be correct!), and the inscrutable description of the transformPoint and deltaTransformPoint methods lacked the specifics I needed. Let's hope this post and the power of Google will help those that seek the same kind of knowledge!
Jan 7 11

Simulating Fluids in Flash

by pixelpracht

A thread in the FGL forums got me interested in the simulation of fluids. Of course this topic has seen quite extensive coverage in academic papers, but getting it to work in the limited environment of Flash provides some interesting challenges. Interesting enough for me to give it a try over my Christmas break.

I loosely followed the paper "Particle-based Viscoelastic Fluid Simulation" to implement a sandbox application that simulates and renders a small blob of fluid and allows you to tinker with various parameters. Click the image below and give it a try!

The fluid is represented as a set of ~400 particles that interact with their neighbours and thus seem to form a coherent entity.

For rendering I use an oldschool approach: Each particle is represented as a metaball. For a given pixel the force function of every metaball is evaluated, summed up and the final sum is compared with a threshold value to decide whether the pixel is inside or outside of the fluid.

To be able to do that in realtime I precalculate the values that a force function yields for pixels within a relevant range of the particle's center and store them as a Bitmap.
Fluid Rendering - Illustration #1
These tiny images (about 40×40 pixels) are rendered at the position of the particle with additive blendmode.
Fluid Rendering - Illustration #2
Now I can decide with a single lookup in the resulting Bitmap whether a pixel belongs to the surface or not. That evaluation step is done by a Pixel Bender shader, and instead of just working with a boolean state (inside or outside) I sample a gradient based on the accumulated force value.
Fluid Rendering - Illustration #3
This approach allows me to achieve a variety of styles in the visual appearance of the simulated fluid by just using different gradients.
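For reference, here's a much-simplified C# sketch of the underlying metaball idea – my own illustration; the real implementation precomputes the falloff into a small bitmap per particle and lets the additive blend mode and Pixel Bender do the per-pixel work:

using System;

public static class MetaballSketch
{
    // Sum the force function of every nearby particle at one pixel.
    public static double FieldAt(double px, double py,
                                 (double x, double y)[] particles, double radius)
    {
        double sum = 0;
        foreach (var p in particles)
        {
            double dx = px - p.x, dy = py - p.y;
            double d2 = (dx * dx + dy * dy) / (radius * radius);
            if (d2 < 1)
                sum += (1 - d2) * (1 - d2);     // smooth falloff: 1 at the center, 0 at the radius
        }
        return sum;
    }

    // Compare the accumulated value with a threshold to decide inside/outside;
    // sampling a gradient instead of a hard threshold gives the softer styles.
    public static bool IsInsideFluid(double px, double py,
                                     (double x, double y)[] particles,
                                     double radius, double threshold = 0.5)
        => FieldAt(px, py, particles, radius) >= threshold;
}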

This technique would unfold its real potential if you could use hardware acceleration to accumulate the field functions, because that's a bottleneck in the current implementation.

Not sure if that stuff has any practical relevance, anyway. But the SWF is only 32kb big so people that are into demos (32k, 64k) could maybe make something fancy out of it. :) Here’s the source code released under the MIT license.