The MadMan is back.

More Madness

When I read my last post now, I must say I must have had quite a boost of optimism and god-like energy back then, eh?

Books to be written, 3D art to be made, Allegorithmic products to review, Python programming, Blender's Game Engine, Modo stuff… Phew!

Well. I have actually done something. I won’t present anything here today, but some things stayed and some things did not.

I’m still into writing the book series, that’s one. (But it will be a bit postponed)

What I'm still into is of course Blender! It has gotten a LOT of updates over the last seven months.
I'm still willing to create a tutorial series on the BGE (Blender Game Engine), which we all know is scripted with Python. Actually, a LOT of Blender is programmed with Python now. It's just the low-level bits and pieces that are left in hardcore C / C++ (and maybe some assembly?).

I was planning on starting on this in a couple of weeks and of course posting it on this blog, if there is any interest in it.

As some of you know, I’m a software developer, mainly using C# and C++, but I’ve also been using some Python through the years.

Lately I’ve been keen on learning to program GAMES. What kind of games? No idea. Just games. Fun programs that can include gaming elements, like “gamification” or something.

So I've downloaded the latest Unreal Engine 4, but that was totally overkill for me. I need to know the basics first.

Even Unity seems a bit cryptic to me sometimes. But that's mostly because of the horrific implementation of C# / .NET they've chosen. It goes against all good usage patterns.

So now I'm going to test out Blender's Game Engine, to see if I can make any sense of that.

Along the way there will be some tutorials and some Python code for those interested.

Also, I'll look into some rendering with Cycles, Thea Render and LuxRender…

Until next time…

The Future Of Decent 3D Software for Enthusiasts and Professionals alike

Create what you want for free!

Ok. We all know and love Blender. But let's face it, it has its quirks that need some ironing out before it can totally replace a package like Autodesk XSI or Autodesk Maya for many professional artists, or at least many of them think it does.

Don't get me wrong here, it is definitely the tool of choice if you want an open source solution that can do "almost anything".

This is not to say that Blender is missing features. On the contrary, it has features that Maya has tried to include, but has shipped with pretty serious bugs since around the 2009 version.
Now they have snatched a perfectly good plugin called NEX and "integrated", a.k.a. "slammed", it on top of their existing buggy tools, creating something of a monster. Again. Those guys really never seem to learn the basics of Software Lifecycle Management. I've tried using it, but it's so buggy that it ruins my models from time to time.

The 2014 edition is already on SP3 and the bugs still aren't sorted out. This is a frustration for many small studios that depend on stable software and don't have the resources to create their own in-house workarounds using the SDK. But what do they do? Soon they'll release a new 2015 version with even more features, and new bugs on top of that.

Then we have Modo. Once my favorite polygonal modeler, back in the version 3 days. Now the whole company has been snatched up by The Foundry, and all the clues point in the direction of massive feature and price increases. So, no more Modo for me, I guess. I have my stable commercial license for Modo 401, which never expires, but hey, things evolve, right? Who knows if it will be compatible with the next operating system I'm forced to update to because of other applications' demands?

It still amazes me to see a company like Newtek still alive and actually kicking these days, with the new Chronosculpt and Nevronmotion applications that let you sculpt in time-based layers and record motion data via a cheap $99 Kinect sensor! Way to go!
How much this will be used remains to be seen, but they are on a creative roll and they NEVER give up. That is the true spirit of life. In addition, they've released a free update to existing v. 11 (maybe also v. 10?) customers in the form of Lightwave 11.6.
This edition is the most groundbreaking release since version 9.6 in my eyes. It actually is a NEW LIGHTWAVE.
A lot of cool new modeling tools and a great deal of new features on the rigging and animation side, with the introduction of the Genoma system and of course an updated integration of the Bullet Physics engine, so sorely needed.
Too bad I only have an educational license for version 9.6. But they do have a good offer these days, letting me upgrade to 11.6 for around $350 or something.
But it's Christmas, and I have other items in my budget screaming for coverage…

When it comes to a simple, fast and stable polygonal modeler, we have the option of using Silo. It's a $159 application and an extremely competent modeler.
But it hasn't been actively developed for over two years.
So the reason for starting this article was really to investigate the possibility of crowd-funding a purchase of the Silo source code from Nevercenter and turning it into an actively maintained open source project. Personally, I have an hour or two now and then for coding some C++, Python or whatever they've used to create the product.

The question is: How many others are in on a project like this?

I've posted a poll over at CG SOCIETY (CGTalk), and for those who would be so kind, I urge you to give your vote to one of the options presented there.

I've been lousy at posting new stuff to my blog lately and have probably lost a lot of readers, but hopefully some remain faithful to the old madman.

Here is the link to the post on CGC:

Poll for Open Sourcing Silo 

For those that don't have a CGC account, get one! Kidding.

I'll present the poll in this post as well, though it won't be linked with the results on CGC, so the best thing is to take the time to register over there. They even have a $19 / year deal for a premium subscription right now (normally $79) with a lot of goodies.

All the best,

The MadMan

Parallel Computing With CUDA Extensions (Part 2)


A “kernel” in CUDA terms can be thought of as a series of instructions to be carried out
by a computation unit on a GPU. Basically a regular program.

1. You write kernels / code as if they were regular serial (top to bottom) programs, like
those designed to run on a single thread.

2. You tell the GPU to launch this code, and it will do so on multiple threads. (you can define how many)

But won’t this lead to the same code being executed multiple times?
Yes, it will, but that will be explained later.
If you write code to output “Hello World” and ask the GPU to run this on 50 threads,
you’ll be greeted back 50 times, and it will all happen in parallel.
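To make that concrete, here is a minimal sketch of such a program (my own illustration; device-side printf assumes a GPU of compute capability 2.0 or newer):

#include <cstdio>

// Every thread runs the exact same code.
__global__ void helloKernel() {
    printf("Hello World from thread %d\n", (int)threadIdx.x);
}

int main() {
    helloKernel<<<1, 50>>>();  // ask the GPU to run this kernel on 50 threads
    cudaDeviceSynchronize();   // wait until all 50 greetings have been printed
    return 0;
}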

But let's say that your program contains a vector of 50 floats.
For each of those floats you want something done, the same way, as fast as possible.
You tell the GPU to spawn 50 threads to work on this kernel (program code).

Inside the kernel, when run on the GPU, each thread can be connected to one vector member, and you have full control over which thread works with which member of the vector.

Each thread starts doing the work as instructed in the code received from the CPU.
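A minimal sketch of what that mapping could look like (the kernel name and the doubling operation are made up for illustration, not taken from any real project):

// Each of the 50 threads handles exactly one member of the vector.
__global__ void processKernel(float *v) {
    int i = threadIdx.x;  // thread 0 works on v[0], thread 1 on v[1], and so on
    v[i] = v[i] * 2.0f;   // the same operation, performed on all members in parallel
}

// Launched from the CPU with one thread per vector member:
// processKernel<<<1, 50>>>(deviceVector);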

As an example, let’s say that when running the code on a CPU only, the CPU would have to traverse the vector members one by one, do the job needed, and continue on to the next member to do the same.
The total execution time for the task would vary based on how busy the CPU was and other factors, but let’s assume that we have 50 members that each needs 10 ms to be processed.

This would easily take 50 × 10 ms (500 ms) to complete, as we work in a non-parallel way.
If we compare this to how the execution would be done in a GPU-assisted way,
the time needed to process each element might be a bit higher, because the working unit will not be as fast as a regular CPU thread, so let's say 20 ms per member.

The difference is that because these tasks are all started in parallel, the GPU would finish processing the whole vector of 50 members in just 20 ms, compared to the CPU, which would need 10 ms × 50 members, giving us 500 ms!

To not lose focus, it might help to visualize situations in programming that could benefit from being able to do several equal tasks at the same time.

One thing that comes to mind is image editing applications. When you have an image consisting of millions of pixels, there will be several thousand pixels that share the same characteristics / properties, like color and brightness.
If you were to write a function to lighten or change the color of all those equal pixels, you'd basically have a job that could benefit from being executed simultaneously, rather than doing the same thing to each pixel in a linear fashion.
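As a sketch of the idea (the kernel name and the fixed brightness offset are my own assumptions, not code from any particular application), lightening an 8-bit grayscale image could look like this:

// One thread per pixel: add a brightness offset and clamp to the 8-bit range.
__global__ void brightenKernel(unsigned char *pixels, int numPixels, int amount) {
    // Build a global index from block and thread coordinates,
    // since millions of pixels won't fit in a single block of threads.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels) {
        int v = pixels[i] + amount;
        pixels[i] = v > 255 ? 255 : (v < 0 ? 0 : v);
    }
}

// Launch with enough blocks of 256 threads to cover every pixel:
// brightenKernel<<<(numPixels + 255) / 256, 256>>>(devPixels, numPixels, 20);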

Usually, when programming using only the CPU, launching and running threads in parallel is considered an expensive and cumbersome activity.
The whole point of using the GPU as a processing unit for “regular” tasks is that it’s very good at certain things, like these two:

1. Launching a lot of threads (and "a lot" means MANY, think thousands)
2. Actually running these threads in parallel

So GPUs make perfect candidates for doing the kind of processing that regular CPUs lack.

For those learning about programming, maybe as a student or on your own: I seriously believe there will soon be heavy demand for competent C/C++ programmers who know how to program with GPU assistance, and that demand will continue into the foreseeable future.

C and C++ might be lower-level than the languages you find most comfortable to use, but the truth is that even though these statically typed, compiled languages have experienced a drop in general interest over the last ten years, they're now on the rise again, thanks to technologies like this and to the importance of power consumption / watts per cycle on modern handheld devices.

C++ is the most efficient language to use for low-power devices (if done right) compared to any other high-level language in existence today, and many large companies are investing huge sums of money in the driving forces behind these languages now.

The future is mobile and the future is (hopefully) green.
To achieve this, we also need to start making software that’s green and environmentally friendly.

I hope this article has made you more interested in learning about GPU assisted processing using tools such as CUDA or OpenCL.

There’s more in the world than an Apple.

Parallel Computing With CUDA Extensions (Part 1)


First, let’s see how to rate a CPU in a parallel way of thinking.

Let's say we have an eight-core Intel CPU.
With eight cores, you can execute 8 operations (wide AVX vector operations) per core,
and each core supports running two threads in parallel via Intel "HyperThreading" technology, so you get:

8 cores × 8 operations/core × 2 threads, and end up with what's called
"128-Way Parallelism"

For more about Advanced Vector eXtensions (AVX) in CPUs, check this page.

Programming without taking advantage of ANY multithreading / parallel processing
techniques means that for each program you run, you use

2/128 = 1/64 of your CPU's total resources (including the automatic "HyperThreading").

In an ordinary C/C++ program you can only run code that uses the CPU as
the computing resource.
If people really took advantage of their cores and threading capabilities, this would
probably be enough for most regular applications, but for applications that do a lot of
heavy calculations, like video / image processing or 3D graphics, it's way better if you can
offload some of these tasks to the simpler (in terms of instructions), but very capable GPU(s) in your machine.

One way to do this is through the use of CUDA extensions.

In this model, the CPU is considered the "HOST" and each GPU is a "DEVICE"
in your system that can be used for doing calculations.
When such a program is compiled, instructions for both the HOST and any DEVICE
are created.
In CUDA the GPU/DEVICE is seen as a "CO-PROCESSOR" to the CPU/HOST.
The programming model also assumes that the HOST and DEVICE have access to separate physical
memory where they can store data.
The DEVICE memory is typically a very high-speed block of memory, faster than the one
on the HOST.

The HOST is "in charge" in CUDA and sends messages to the DEVICE telling it what to do.
The HOST keeps track of:

1. Moving data from CPU memory -> GPU memory
2. Moving data from GPU memory -> CPU memory
(CUDA's version of C's memcpy() is cudaMemcpy())
3. Allocating GPU memory
(again, CUDA uses cudaMalloc() instead of malloc())
4. Launching "kernels" on the GPU (in CUDA, the HOST launches "kernels" on the DEVICE)

A typical flow in a CUDA application would be something like:

1. The CPU runs cudaMalloc on the GPU
2. The CPU copies input data from CPU->GPU with cudaMemcpy
3. The CPU launches the transferred "kernels" on the GPU (kernel launch)
4. The CPU copies the results back with cudaMemcpy
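A minimal sketch of that flow in code (error checking omitted, and doubleKernel is just a made-up example kernel that doubles each element):

#include <cuda_runtime.h>

// A made-up example kernel: double each element of the array.
__global__ void doubleKernel(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main() {
    const int n = 50;
    float host[n];  // input data living in CPU (HOST) memory
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));                               // 1. allocate GPU memory
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);  // 2. copy input CPU -> GPU
    doubleKernel<<<1, n>>>(dev, n);                                    // 3. kernel launch
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);  // 4. copy results back
    cudaFree(dev);
    return 0;
}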

So, what is this “Kernel” stuff all about?

Guess we’ll find out in part 2 of this series…

Using the Digia Qt Framework (LGPL) for “commercial” applications

This post is targeted at any developer that needs a good framework to develop true cross platform applications.

By "true cross platform" I mean that you can build for Windows, Apple OS X, Linux, Android and other mobile platforms from the same source code.

The licensing terms of Qt are the same whether you're an "indie developer" or a team of developers.

There are many people out there unsure whether they should learn / use the Qt Framework, because of the relatively unclear definitions of the different licensing options.

Many feel that it would be a waste of time if they can't sell the application in the end without paying for a rather expensive commercial Qt developer license, or if they risk legal problems by including a certain part of their code.

The aim of this article is to clear up some of that uncertainty.

Qt isn’t just a GUI Framework.
You can use it to create just about any modern application with networking capabilities, 2D / 3D graphics, sound, you name it.

It’s a really nice way to get applications working on multiple devices and platforms, in the shortest time possible, and without resorting to some JavaScript “miracle application” that turns your source code into “working applications” for both desktop and mobile units and costs a fortune.

The user interface part (GUI) of a Qt application looks really good and integrates with the platform it's running on, so you won't notice any difference from an application created to run natively on that device, like an OS X application written in Objective-C using Cocoa.

Qt isn’t only for C++ developers!
You can (and should) also use a declarative language called QML to create the user interface part of the application.
It's even possible to create a complete, simple application using nothing but QML if you wish.

This opens up for closer collaboration between designers and back-end coders.
It must be said that Qt is mainly a C++ framework, and to get access to the full functionality you need to use C++,
but for the user interface part, QML is the way to go for most projects. It saves you many hours of C++ coding.
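For a taste of the C++ side, here is a minimal sketch of a Qt Widgets application (my own hello-world illustration, assuming Qt 5 with the widgets module):

#include <QApplication>
#include <QPushButton>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);  // one application object per program

    // A native-looking button on every platform Qt supports.
    QPushButton button("Hello from Qt!");
    button.resize(200, 60);
    button.show();

    return app.exec();  // hand control over to Qt's event loop
}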

So, what about the licensing question, man?

Oh, of course…

If the scenario goes something like:

“You’re a single developer or a team of developers that needs to use some functionality of Qt (like QtWidgets) in your application…”

The short answer is:

Relax, learn and use Qt; you can make money on your applications even if you use the LGPL version of the framework.
You only need to be aware of a few very important "limitations".

1. Your application must be dynamically linked to the Qt components that came with your downloaded LGPL Qt distribution. No static linking allowed.

2. You can’t make changes to the Qt source code itself and sell your application based on the changed version of Qt.

3. Inform users of your application that Qt is used in the application in some licence text or in a readme somewhere within your distributed application files.

4. Provide a copy of the Qt LGPL licence file together with your distributed application files.

Other than that, you’re free to distribute your application as a commercial closed-source application, using any licensing you may want.

There are also corner cases, such as if you've made changes to Qt and those changes have been officially accepted as part of Qt, but this is not the case for most developers.

Hope this cleared things up a bit.

So start learning Qt and happy coding!

The series on Modern C programming has a new home!

Future posts for the series have a new home!

I’m a big fan of Evernote.
I use it for pretty much everything, from recipes, book chapter drafts, digital scanning of all my physical snail mail, project organizing and to-do lists to (from this day) blog editing!

I've found an incredibly useful (for me at least) service called Everblog, which lets me write and edit my blog posts straight from within the Evernote application.

This will be a huge timesaver for me, as I use Evernote for all my resource gathering and pre-press work for my planned book, “Real C, Real Value – A modern approach to C programming” and for my other writing projects.

So, all future posts in the series that I plan on publishing to the group will live there, and discussions on the topics will live on LinkedIn.
I'll use that blog and this group for gathering feedback on my articles, which in turn will become chapters of the book on C.

My list of planned practical projects for the book is something along the lines of:

1. “A gentle introduction to the C language, C11 style”
2. “Introduction to Socket programming in C”
3. “Cross platform GUI programming in C”
4. “Connecting the sockets and GUI into a usable program”
5. “Introduction to graphics and image processing in C”
6. “Using a game engine”
7. “Writing a simple 2D game with what we’ve learned so far”
8. “C Programming and sound processing”
9. “Adding sound to our game”
10. “Creating a graphical Mp3 player in C”
11. “Platform specific challenges in multi-platform programming”
12. “A theory chapter on algorithms and the Standard C Library”
13. “What’s Next?”

As you can imagine, there is a lot of writing to do, and I need a place to get feedback on the way, hence the LinkedIn group and the blog.

I'm off to Copenhagen for the weekend, but I'll post the next update on Monday.

Chris

I’ve started a new LinkedIn group! C Programming for a Modern Era

There are lots of C programmers. There are lots of C programming groups and forums. So why create another?
Well, I'm starting a tutorial series on my blog about modern C programming, using the latest standard of the language as a base, not considering compatibility with "old" versions of C, but harnessing the possibilities that now exist in the language and its libraries. The additions to the C language over the last ten years or so are substantial, and a lot of stuff that was really cumbersome to do in C back then is now as easy as (or easier than) in any modern high-level language.

My view on C has always been that it is a really useful language to learn, because most other programming languages in widespread use today are based heavily on it, and because it lets the programmer experience the detailed nature of high-level data structures, and how the low-level structure of the data that implements them really looks and behaves.

With the latest standards of the C Programming Language, it's no longer a hard task to learn how to create basic C programs that actually work.

It's been a long time since I used C, but lately I've started to pick it up again in my spare time, because of all the cool new features that were "missing" in the language before.

If you feel like doing some modern C programming, please join this group and come along for the ride.

I’ve joined LinkedIn! Please help me build a professional comp. sci / CGI network!

Hi there, dear readers of this blog!

I’ve recently joined LinkedIn, and I’m trying to build a network consisting of the best in the computer science, software engineering and CG industries.

I've gotten a couple of big fish already, but my network is still super-tiny!

I’m hoping to use this network for both socializing and for sharing technical issues and to present new thoughts and ideas for future products.

I believe a network fusion of these three industries will be an important avenue for anyone interested in fronting their work in the public arena, absolutely free.

So please join and invite anyone you think should be in there. Anybody with a passion and some talent in any of these three fields is very welcome to join!

So please check my profile and join my network over at:

http://www.linkedin.com/pub/chris-sederqvist/57/713/103/en

Dynamic Parallelism in CUDA Version 5!

After using all my spare time on Blender lately, I’m now going to digress into another realm.

After reading a TechBrief at the Nvidia CUDA Developer Society, I had to wrap my head around something other than modeling, just for a little while! 🙂

In CUDA Version 5, you can now call a CUDA kernel from within another, without going via the CPU.
The “parent” kernel will launch a “child” grid, which can itself also create new work to form an execution hierarchy. The “parent” will only signal as completed once all children are done processing.
The recursion depth will depend on your GPU resources.

So, Dynamic Parallelism in CUDA 5 enables a CUDA kernel to create (and synchronize) nested work via the device runtime API: triggering other kernels, performing memory management on the device, and creating streams and events, all without needing a single line of CPU code!
A CUDA Kernel can also call GPU Libraries such as CUBLAS directly, without any CPU intervention.

The Device Runtime API in CUDA C/C++ is a subset of the CUDA Runtime API for the Host, keeping the same syntax for easy code reuse.

Here is an example of calling a kernel from within a kernel:

__global__ void KernelChild(void *data) {
    // Do something with the data
}

__global__ void KernelParent(void *data) {
    if (threadIdx.x == 0) {
        // Let a single thread in the block launch the child grid
        KernelChild<<<1, 32>>>(data);
        cudaDeviceSynchronize();  // wait for the child grid to finish
    }
    __syncthreads();  // make the child's results visible to the whole block
    // Do something
}

// On the HOST
KernelParent<<<8, 32>>>(data);

Reducing the traffic between the GPU and CPU over the PCIe bridge will bring a key performance boost for things like fluid dynamics simulations or similar tasks requiring pre-processing passes over the data.

GPU Computing rocks!

Happy Summer Holidays!

Making the move, “From Commercial 3D Software to Blender” Part 1

So. I have used software like Maya, Lightwave and Modo for some time now. The reason I could do this is that I have educational licenses for the software. Well, I actually have a commercial license for Modo, but anyway…
If I want the latest editions of these packages, I would have to upgrade my educational licenses to commercial ones, and that is anything but cheap!
My finances these days suck big time, so I soon came to the conclusion that this would be impossible.
So what are my options? Blender!
I have tried Blender many times before; each time there is a new version I download it and goof around in the viewport, but every time I have been put off by the non-standard way of doing things compared to my other software.

The thing is that Blender IS a bit different from everything else, but that is not necessarily a bad thing.
It just takes some getting used to. I have never before actually tried to LEARN Blender, but this time I have seriously committed myself to learning this software.
I will not lie, it has not all been easy, and I am still in the very early stages of "Mastering Blender", but every day I learn something new and discover the cool things that are different, but good.
So, my goal with this series of posts will be to share my experiences with moving from commercial software packages like Maya, Lightwave and Modo to Blender.
I’ll compare features and methods of doing things along the way as I discover this myself.

There are many good reasons for using Blender. These are my reasons:

  • Cost (this one is obvious)
  • In the latest version Blender has gotten BMesh and BSurfaces
  • The integration of Bullet Physics (Same as in Lightwave, Modo, Maya 2013)
  • A very active and vibrant user community
  • Always get the latest innovations for Free (as in beer)
  • Deep support for scripting with Python via the API
  • A new rendering engine with a superb node based shader system (Cycles)
  • Great modeler (once you get used to it)
  • Some of the best UV tools available both commercially and free

So this is a pretty impressive list if you ask me.

I'll continue sharing my experiences with this project; hopefully some of you will see that there is actually an alternative to Autodesk!

I don't say that Blender is the best tool for everything, but neither is any of the commercial packages. They all have their flaws.
But if you ever thought of switching to Blender, now is a good time to do so! New tools and features make Blender a serious competitor in the market of professional tools for creating Digital Art.

Happy Blending!