
Tuesday, 16 February 2016

Looking at the structure inside cells

How complex and structured is the inside of a cell? It's hard to imagine, but the internal organisation of cells is typically precisely controlled by molecular skeletons and scaffolds, giving cells the shape they need to function.

We can discover the 3D organisation of the inside of cells using electron tomography: a process where you capture a series of images with an electron microscope, tilting the sample to a slightly different angle for each image. These images can then be used to calculate the 3D shape of the sample, using the same maths as an X-ray CT scan.
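To give a flavour of that maths, here is a toy Python sketch of unfiltered back-projection, the simplest version of the CT-style reconstruction (real software like IMOD uses more sophisticated filtered or iterative methods, and the function name and array shapes here are purely illustrative):

import numpy as np
from scipy.ndimage import rotate

def back_project(tilt_series, angles_deg):
    # tilt_series: one 1D projection per tilt angle, shape (n_angles, n)
    n = tilt_series.shape[1]
    recon = np.zeros((n, n))
    for projection, angle in zip(tilt_series, angles_deg):
        # Smear each 1D projection across the whole plane...
        smear = np.tile(projection, (n, 1))
        # ...then rotate it back to the angle it was recorded at
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon / len(angles_deg)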

Leishmania parasites are exquisitely structured. While they are only 2 micrometres wide (100 would fit across a human hair) they have a precise internal organisation which they faithfully replicate each time they divide. One of the distinctive parts of this organisation is the flagellar pocket, where the cell membrane folds in on itself at the base of the whip-like flagellum that the cell uses to swim.

In my latest paper, "Flagellar pocket restructuring through the Leishmania life cycle involves a discrete flagellum attachment zone", I used electron tomography to reconstruct the three-dimensional organisation of the Leishmania flagellar pocket. The structure in this area of the cell is incredible, and the journal picked a rendering of it for the cover image.


The volume covered in this 3D reconstruction is only 3 by 2 by 1 micrometres, about the size of a typical bacterial cell, but it contains enormous complexity. I have shown the microtubules (which make up most of the cytoskeleton) in red and the membranes in blue. Each microtubule is only about 5 molecules wide, around 10,000 times narrower than a human hair! Some other specialised parts of the cytoskeleton are shown in green.

You can download the paper for free here to take a look at the structures in this area of the cell in more detail.

Software used:
IMOD: Electron tomography reconstruction
Blender: Tidying and rendering of the 3D structure

Wednesday, 7 January 2015

Trypanosome Lego

Trypanosomes and Leishmania are the two tropical parasites that I do most of my research on. These cells seem to have a lot of modularity in controlling their shape, and have quite a lot of flexibility in reshuffling where particular structures (made up of many organelles) sit within the cell.

The base of the flagellum, the whip-like tail which the cell uses to swim, is also the site where the cell takes up material from its environment (essentially its mouth). It is linked with the Golgi apparatus (an important organelle in protein processing), the mitochondrion (the powerhouse of the cell) and the mitochondrial DNA. It turns out that reducing the level of just one protein in the cell can cause this entire complex structure to shift its position.

Cells are not quite as flexible as Lego, but it is still impressive that a single protein can have such a large effect on the organisation of a cell.

Monday, 29 December 2014

Forgotten Futures - New York


What if cities looked like this? The 1920s view of cities of the future was glorious; huge buildings towering into the sky, multi-layer roads, rail and pavements, airships and aircraft, and the bold geometry of art deco.

Sadly this world never came into existence. But what if it had? What would 1950s New York have looked like? I re-imagined this forgotten future, based on the view from the Empire State Building towards Grand Central station and the Chrysler Building, in a world where the 1920s vision of the future came to be.


Software used:
Blender: 3D modelling, texturing, rendering, compositing.
Paint.NET: Final image tweaks.
Inkscape: Texture detailing.

Building a forgotten future; 7 days of 3D modelling in 20 seconds:


Tuesday, 20 May 2014

Jurassic Wedding



You will have seen the instant internet classic of a dinosaur crashing a wedding... I got married this year and just had to do the same. Fortunately my wife agreed! I am a biochemist, but cloning a dinosaur to crash my wedding would have been a bit of a challenge, so I had to stick to the graphics approach instead.

So how do you get a dinosaur to crash your wedding?

Step 1: Recruit an understanding wedding photographer and guests for a quick running photoshoot. Make sure everyone is screaming and staring at something imaginary!


Step 2: Recruit a dinosaur. A virtual one will do, and I used this excellent freely available Tyrannosaurus rex model for Blender.



Step 3: Get some dynamic posing going on! Most 3D graphics software uses a system called 'rigging' to add bones to a 3D model to make it poseable. This is exactly what I did, and with 17 bones (three for each leg, seven for the tail, two for the body and neck and two for the head and jaw) I made our pet T. rex poseable.

 The bone system

The posed result
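If you fancy trying this yourself, here is a minimal sketch of building a connected bone chain (like the tail) through Blender's Python API; the bone names and layout are my assumptions, not the actual rig used for the photo:

import bpy
from mathutils import Vector

# Add an armature and switch to edit mode, where bones are defined
bpy.ops.object.armature_add(enter_editmode=True)
bones = bpy.context.object.data.edit_bones

# Chain seven tail bones, each connected to the previous one
parent = bones[0]  # the default bone created by armature_add
for i in range(7):
    bone = bones.new("tail.%03d" % (i + 1))
    bone.head = parent.tail
    bone.tail = bone.head + Vector((0.0, 1.0, 0.0))
    bone.parent = parent
    bone.use_connect = True
    parent = bone

bpy.ops.object.mode_set(mode='OBJECT')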

Step 4: Get the T. rex into the scene. By grabbing the EXIF data from the running photo I found that it was shot with a 70mm focal length lens. By setting up a matching camera in Blender and tweaking its position I matched the perspective between the rendered T. rex and the photo of the running people.
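Pulling the focal length out of the EXIF data is easy with a Python library like Pillow; a small sketch (the filename is hypothetical, and the last line shows the matching Blender camera setting):

from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("running_photo.jpg")  # hypothetical filename
exif = {TAGS.get(tag, tag): value
        for tag, value in (img._getexif() or {}).items()}
print(exif.get("FocalLength"))  # e.g. 70.0 (mm)

# In Blender, set the camera focal length (in mm) to match:
# bpy.data.cameras["Camera"].lens = 70.0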


Step 5: Making the dino look good. A 3D model is just a mesh of points in 3D space; to get it looking good, texturing and lighting need to be added. For this project they also needed to match the photo. Matching the lighting is particularly important, and I used Google Maps and the time the photo was taken to work out where the sun was as accurately as possible.

The T. rex wireframe

Textured with a flat grey texture.



With a detail bump texture and accurate lighting.

With colours, detail texture and lighting.


Step 6: Layering it all together. To fit into the scene the dinosaur must sit into the picture in 3D; in front of some objects and behind others. To do this I just made a copy of the guests who needed to sit in front of the dinosaur and carefully cut around them. The final result is then just layering the pictures together.



So there you go! 6 steps to make your own wedding dinosaur disaster photo!


Software used:
Blender: 3D modelling and rendering.
Paint.NET: Final layering of the image.

Tuesday, 16 July 2013

Micro 3D Scanning - 1 Focal Depth

3D scanning is a very powerful tool, and its value isn't limited to the objects and scenes you interact with in everyday life. The ability to precisely determine the 3D shape of tiny (even microscopic) objects can also be really useful.

 The 3D reconstructed shape of a tiny (0.8 by 0.3 mm) surface mount resistor on a printed circuit board. This was made using only a microscope; no fancy laser scanning required!

3D scanning through a microscope is a bit different to normal 3D scanning; mostly because when you look down a microscope at an object it looks very different to what you might expect from day-to-day life. The most immediately obvious effect is that out of focus areas are very out of focus, often to the point where you can barely see what is there. This effect comes down to the angle over which light is collected by the lens capturing the image; your eye or a camera lens in everyday life, or an objective lens when using a microscope.

Three images of the surface mount resistor. The three pictures are taken at different focus distances so different parts of the image are clear and others blurred. The blurred parts are very blurred!

In everyday life, when using a camera or your eyes, the distance from the lens to the object is normally long; it may be several metres or more. As a result the camera or your eye only collects light over a small range of angles, often less than one degree. In comparison, microscopes collect light over an extremely large range of angles, often up to 45 degrees. The angle must be this large because the objective lens sits so close to the sample. A wider angle of light collection makes out of focus objects appear more blurred. In photography terms the angle of light collection is related to the f-number, and small f-numbers (which have a large angle of light collection) famously give very blurred out of focus portions of the image.
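To put rough numbers on it: for a camera the collection angle depends on the aperture size and the subject distance, while for a microscope objective it is set by the numerical aperture (NA = sin θ in air). An illustrative calculation, with assumed example values:

from math import atan, asin, degrees

# Everyday camera: a 25 mm wide aperture seen from 2 m away subtends
# a half-angle of only arctan(0.0125 / 2) - well under a degree
print(degrees(atan(0.0125 / 2.0)))  # ~0.36 degrees
# Microscope objective: the numerical aperture NA = sin(theta) in air
print(degrees(asin(0.7)))  # NA 0.7 objective: ~44 degrees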

The upshot of this is that in a microscope image the in-focus parts are those which lie very near (often within a few micrometres of) the focal plane. It is quite easy to automatically detect the in-focus parts of an image using local image contrast (this is actually how autofocus works in many cameras), mapping which parts of a microscope image are perfectly in focus.
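As a sketch of the idea (my own illustrative Python, not necessarily the exact measure I used), local contrast can be scored with the local variance of the Laplacian:

from scipy import ndimage

def focus_map(img, size=15):
    # The Laplacian responds strongly to the fine detail that is
    # only present in sharply focused regions
    lap = ndimage.laplace(img.astype(float))
    # The local variance of that response over a small window gives
    # a per-pixel "how in focus is this?" score
    mean = ndimage.uniform_filter(lap, size)
    mean_sq = ndimage.uniform_filter(lap ** 2, size)
    return mean_sq - mean ** 2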

In this series of images the most in-focus one is image 6 because it has the highest local contrast...

 ... using edge detection to emphasise local contrast in the image really highlights which one is perfectly in focus.

In this series of images the most in-focus one is image 55 instead.

The trick for focus-based 3D scanning down a microscope is taking the ability to detect which parts of an image are in focus, and using it to reconstruct the 3D shape of the sample. Getting to the 3D scan is actually really easy:
  1. Capture a series of images with the focus set to different distances.
  2. Map which parts of each of these images are perfectly in focus.
  3. Translate this back to the focus distance used to capture the image.
This concept is very simple; if you know one part of an object is perfectly in focus when the focus distance is set to 1mm, that means it is positioned exactly 1mm from the lens. If a different part is perfectly in focus when the focus distance is 2mm, then it must be positioned 2mm from the lens. Simple!
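Combined with the focus-scoring sketch above, the whole reconstruction collapses to a few lines of Python (again only a hedged sketch; the 0.01 mm step matches the image spacing used for the resistor below):

import numpy as np

def depth_from_focus(stack, z_step_mm=0.01):
    # stack: (n_slices, height, width), one image per focus distance
    sharpness = np.stack([focus_map(s) for s in stack])
    # For each pixel, find the slice in which it was sharpest...
    best_slice = np.argmax(sharpness, axis=0)
    # ...and convert that slice index back into a distance
    return best_slice * z_step_mm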

It may be a simple idea, but this method gives a high quality 3D reconstruction of the object.

The reconstructed 3D shape of the resistor, using 60 images focused 0.01mm apart, mapped to a depth map image. The lighter bits stick out more from the surface, and the darker bits stick out less.


Using the depth map, the resistor can be reconstructed in full colour in 3D! Pretty cool for something less than 1 mm long...

Does that seem impressive? Then check out the videos:


A video of the original focus series of images captured of the resistor.



The reconstructed 3D shape.



A 3D view of the resistor, fully textured.


This approach is, roughly speaking, how most 3D light microscopy is done in biological and medical research. It is very common practice to capture a focal series like this (often called a "z stack") to get 3D information from the sample. 3D imaging is most useful in very thick samples where you want to be able to analyse the structure in all three dimensions; an example might be analysing the structure of a tumour. My research on Leishmania parasites inside white blood cells uses this approach a lot too. The scanning confocal fluorescence microscope was actually designed to maximise the value of this 3D effect by not only blurring the out of focus parts of the image, but eliminating that light altogether by blocking it from reaching the detector.

Software used:
ImageJ: Image analysis.
Blender: 3D viewing and rendering.

Monday, 7 January 2013

OpenTTD 32bpp Part 4 - The Finished Product

By using 3D graphics and automation I was able to make a complete, high resolution graphics replacement for OpenTTD almost entirely on my own...


Making this graphics set was still a hell of a job though:

The total time I spent in Blender to make the graphics was 135 hours to make 391 blender files.
To render all these Blender files into sprites took 14.16 processor hours to produce 69931 images.
The final downloadable graphics files are 273 MB and have been downloaded 24123 times.
The total size of all 146280 source files is 2349 MB.

You can find out more about OpenTTD at http://www.openttd.org/ and more about this high resolution base graphics set (called zBase) here and here.


Software used:
Blender: 3D modelling and rendering.
ImageJ: Sprite post-processing and managing.
Python: Computer usage tracking.
OpenTTD: The game!

OpenTTD 32bpp Part 3 - Automation

3D software makes it quick and easy to make high quality textured and shaded sprites, but it is not just limited to still images; 3D software is normally designed for animation. Using the animation tools in Blender I could massively simplify producing sprites for OpenTTD.

The graphics for different trucks are a great example.


There are 3 different generations of truck graphics (old, current and futuristic), 16 different cargo types, and loaded and unloaded graphics; that's 3 x 16 x 2 = 96 different combinations! Designing the graphics in 3D allows the truck to be broken into chunks: the truck body, the trailer and the cargo.


A screenshot of the 3D source for the truck graphics.

With this setup making the various combinations of truck body and cargo is simple; different animation frames just have different trucks and cargoes positioned in front of the camera. It does need a bit of geometry to get all the angles perfect though:
OpenTTD's sprites have the front left and front right sides at 26.565 degrees from horizontal (the classic 2:1 pixel slope, arctan 0.5).

In 3D it is simpler: with the camera rotated 45 degrees horizontally, an elevation of 30 degrees foreshortens horizontal edges by a factor of sin(30 degrees) = 0.5, which produces exactly that on-screen slope.
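A quick sanity check of that geometry in Python:

from math import atan, degrees, radians, sin

# A camera at 45 degree yaw and 30 degree elevation foreshortens
# horizontal edges to a slope of sin(30 degrees) = 0.5 on screen
print(degrees(atan(sin(radians(30)))))  # ~26.565 degrees: the 2:1 slope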


Each truck has 8 sprites for the 8 different directions it can drive in. Creating these 8 different images using the animation tools in 3D software is also easy: just rotate the truck 360 degrees in front of the camera over 8 frames.
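In Blender's Python API that is just eight keyframes; a minimal sketch (the object name "Truck" is my assumption):

import bpy
from math import radians

truck = bpy.data.objects["Truck"]  # hypothetical object name

# One animation frame per facing direction, 45 degrees apart
for frame in range(8):
    truck.rotation_euler.z = radians(45 * frame)
    truck.keyframe_insert(data_path="rotation_euler", frame=frame + 1)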

Finally a few extra images are needed to complete the set of sprites required. One of these is the mask sprite to show which bits of the truck should be coloured using the company colour:


Other ones are the different sprite sizes for different zoom levels (256px, 128px and 64px).

Handily these types of outputs are also useful for animators, so the tools to automate them already exist. This tangle of image processing nodes in Blender automatically generates and saves the different images:




This level of automation generates all the sprites OpenTTD needs with the minimum of effort. With a little bit of coding these sprites are now recognised by OpenTTD and can be used.

Continued here: OpenTTD 32bpp Part 4 - The Finished Product


Software used:
Blender: 3D modelling and rendering.
OpenTTD: The game!

OpenTTD 32bpp Part 2 - Moving to 3D

The original OpenTTD graphics were 8-bit. This means that all the images were made up only of colours taken from a palette of 256 different colours. This image:
Is only made up from colours from this palette:


 For example you can take all the pixels from the building which are the grey shade indicated in the palette:

An 8-bit image format was used because it was much faster for older computers to display. While the images are simpler, it actually makes the graphics harder for a person to draw; every single pixel has to be picked carefully. It isn't possible to use normal Photoshop-style techniques like altering brightness or contrast to reshade the graphics, which makes things difficult. Imagine having carefully drawn a brick wall: to use it on the shadowed side of the building you are forced to completely redraw it.
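You can reproduce this kind of palette restriction yourself with the Pillow library in Python; a small sketch (the filename is hypothetical):

from PIL import Image

img = Image.open("sprite.png").convert("RGB")  # hypothetical filename
# Quantise down to a 256-colour palette, as in the old 8-bit graphics
pal = img.quantize(colors=256)
print(pal.mode)  # 'P': each pixel stores an index into a palette
print(len(pal.getpalette()))  # 768 values: 256 colours x 3 (R, G, B)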

In contrast 32bpp images are easy to make, especially using 3D modelling and rendering. The fact that OpenTTD can now use 32bpp graphics is therefore a big advantage! Once a building is designed in 3D (modelled) the rendering process (converting it into an image) calculates all lighting, shadows, shading etc. This makes it very quick to generate a realistic looking building because the computer does a lot of the work for you.
 
The building as modelled in Blender, the lamps for lighting are the dotted sun-shaped objects on the right.

The rendered image with accurate shadows and shading.


In the same way that 3D modelling and rendering make lighting and shading simple they also make texturing simple.

In the old 8-bit graphics textures like brick walls had to be hand-drawn, brick by brick. With some smart texture design and 3D rendering even hard-to-draw materials can be rendered quickly and accurately. Like with lighting and shading this is far quicker than drawing the texture by hand.
Brick texture on some simple objects

Brick texture on a really complicated object!

With lighting, shading and texturing all simplified by 3D modelling and rendering, the new high resolution graphics are actually easier and quicker to make than the old ones. The modelling is still a bit tedious though...

Continued here: OpenTTD 32bpp Part 3 - Automation

Software used:
Blender: 3D modelling and rendering.
OpenTTD: The game!

OpenTTD 32bpp Part 1 - Making a Massive Graphics Update

If you didn't know, OpenTTD is an open source remake of the classic "Transport Tycoon Deluxe" by Chris Sawyer, the maker of RollerCoaster Tycoon. It still has a huge cult following! Unfortunately the original Transport Tycoon graphics, which can be used in OpenTTD, look slightly dated (they date from 1994/5)...

 The original graphics

Originally OpenTTD relied on the graphics from the original game. In an effort to make the game fully free to play the graphics were redrawn from scratch, a massive project that took nearly 3 years to finish. Unfortunately these were subject to the same technical limitations as the original Transport Tycoon graphics:
 The free graphics replacement

The biggest problem with these graphics is their size. The game was originally designed to be played on a screen only 640x480 pixels in size; a modern full HD display is 3 times wider, over twice as tall, and has nearly 7 times the area. In an effort to make the game more playable extra zoom levels were added: now the view can be zoomed in 4x further, but the result is not pretty:
 I can see pixels!

The solution? Redraw all the graphics. Again. The result is totally worth it though:
One retina display-safe set of graphics.

OpenTTD uses a (now quite outdated) method for displaying game graphics in which every in-game object is a single image called a sprite, like this:
 
One construction stage of one building.

In some ways this makes a complete, high resolution graphics replacement simple; all you have to do is draw the big brother of each sprite:
The same sprite, 4 times larger and with better colour depth and full transparency.

While this sounds simple it is still a mammoth task. There are currently 11949 sprites (11949 separate images) required for OpenTTD. These cover all the objects, buildings, vehicles, industries, etc. across the four different world environments you can play in... Some of these don't need to be replaced (like the sprites used for the fonts) but it is still a huge number of images.

So how could I possibly go about doing this? In short: automation and 3D rendering.

Continued here: OpenTTD 32bpp Part 2 - Moving to 3D

Software featured:
OpenTTD: The game!

Saturday, 26 May 2012

3DQR

Emart in Korea just came up with something amazing: a sundial-like sculpture whose shadows form, between 12 and 1 o'clock, a QR code you can scan to get info about special offers. I had to have a go myself!
This is a 3D rendering of a 3D shape which, when the light is from the right angle, makes a QR code which encodes a link to this blog. You can see a bigger version here.


QR codes are the leading 2D barcode method for encoding information and can be scanned by many phones. A simple grid of black and white squares encodes the data:
This is the QR code that encodes a link to this blog:
Working out the 3D shape that makes shadows which look like the QR code is actually quite simple. By following three rules, each square in the QR code can be converted from black/white to a 3D height which will give the right shadowing effect:
  1. If a square in the QR code is white that square should have a height of zero.
  2. If a square in the QR code is black and also has a black square directly above it then it should have a height of zero.
  3. If a square is black and the square directly above it is white then it should have a height greater than zero. Starting from that square, work downwards counting the number of black squares before you reach a white square; that count is the height the square should have, e.g. a black square with two black squares below it and then a white one should have a height of 3.
This can be automated easily; this is the ImageJ macro code which does this calculation:
run("8-bit");
run("Add Slice");
for (x=0; x
for (y=1; y
setSlice(1);
v=getPixel(x, y);
if (v==255) {
w=0;
} else if (v==0) {
if (getPixel(x, y-1)==0) {
w=0;
} else {
y2=y;
while(getPixel(x, y2)==0) {
y2++;
}
w=y2-y;
}
}
setSlice(2);
setPixel(x, y, w);
}
}

This picture shows the heights I calculated for each square in the QR code, black corresponds to a height of zero and each brighter shade of grey corresponds to a height of 1, 2, 3, etc:
I made a 3D model of this in Blender:
It doesn't look like much... but if you look at it from the right angle, with the right direction of lighting, the QR code pops out:
All in all pretty cool!

Software used:
ImageJ: QR code analysis
Blender: 3D modelling and rendering