
I’m going away for 4-6 months, but please stay subscribed because…

New stuff is coming up!

Keywords: Game Development, Python, C, C++ and (hold your breath) Lisp!

New ways of thinking about good old OOP, including functional development in Haskell, plus Test-Driven Development and refactoring legacy code in C++.

More 3D stuff, like Allegorithmic Substances and Modo 701.

The Blender Game Engine (programmed via Python) gets a thorough examination.

A taste of my upcoming book, called “Becoming a Real Life Developer”, vol. 1: “The Stuff You Need to Know”.
This is a planned three-volume book series covering multiple programming languages and paradigms, plus how life unfolds in the real world of software development: version control systems, code reviews, the art of problem-solving and thinking “outside the box”.
It will also be geared towards team building and how to fit into an established team of developers.
The art of becoming a software developer is very much a psychological challenge with a constant need to stay on top of the latest technologies.
My goal is to make one realize that this business is not always a bed of roses, but an extremely satisfying way of making a living if you know the right way to walk forward and get promoted without pushing yourself off the edge…

Until next time:

The Mad Man (going on a spiritual ride)

Follow-Up: Reasons to NOT only learn .Net / C#

A modern developer should master at least 3 languages!

A while ago I was all excited about finally “getting it” with the more advanced parts of C#, like Generics, Events, Delegates, and Lambdas. So I wrote a post about 18 reasons you should use C# / .Net / Mono. It kinda sounded like I’d found salvation in it or something.

Well, I did not. I was just “on a roll”!

I’m often on these rolls. This is what gets me ahead of the competition and it’s what gets developers with master’s degrees in Computer Science and programming asking ME for help at work from time to time. That’s wicked cool, but it has its downsides.

I seldom sleep more than 4 hours per night, because I just HAVE to finish up a stupid project so I can understand a concept in a programming language I’m probably never going to use, like COBOL or something. But who knows, right?

This brings me to the main theme of this blog post:

I’ve been interested in programming for a long time. Long before I started to work as a developer, I snuck in some developer tasks when working as a network and security administrator, and when working as a server and system administrator.
I managed to convince my boss that I really needed this and that book or course so we could do this and that so much better. It worked every time.

My job back then was basically sitting in my office with a self-made monitoring system that alerted me if some fiber link was down or a router to a branch office went bananas, something that didn’t happen very frequently. So, my responsibility was to make sure the systems ran 24/7, and that was that.
I’m making it sound a bit easier than it was at times, but the point is that I had a lot of time to study programming!

I started learning the C language from the ubiquitous K&R book that I still keep in sight for the kicks of it. This was the first standardized version of C, so it had some quirks to put it mildly, but when you understood it, nothing could beat it! At least not in terms of efficiency and performance… Time to market is another story.
I was doing a good deal of scripting as part of my job, for automating mass installations, upgrades, log-on scripts and those things, but that was in VBScript! Can you believe it? VBScript! I hated it.

So I started to replace many of the scripts with Perl and Python using the Win32 libraries, something that was way cooler.

Then I got interested in cross-platform programming in Java, because I missed a real GUI beyond the stripped-down Tk I managed to get out of Perl and Python, and it had to run on Windows, Linux and FreeBSD servers. While I would probably get a compile-time error writing “Hello World” in Java today, I thought it was fun while it lasted.

Then the real shit hit the fan. I decided to change jobs and work as a full-time developer. My responsibilities were diverse and involved working with legacy Borland C++ code and some C# 2.0 written in C++ style. So I had to get really serious about learning C++ and C#. The first months I went to work, did what I could, then wrote down everything I didn’t know how to do.
When I got home I taught it to myself through books, tutorials and Googling so I could fix it the next day.

This went on for almost a year, and I was totally up to my neck in deadlines and trouble.
It wasn’t really my vision of having a job doing what I liked most.
But somehow I managed to get by and new projects came along that demanded the same level of commitment.
That’s the price to pay for adding a bit extra to the resume…

Today I feel a lot safer about my skills in the “major languages” C, C++ and C#, but I’ve realized that I’ve only been working in one paradigm, and that is object-oriented programming.

So I’ve now started to learn two new languages (or three actually), namely Common Lisp, Erlang and TCL/Tk.
What I discover each day is that the new things I learn about the more functional approach to programming could have saved my ass plenty of times if I’d known some of the same techniques when working on the projects I barely got through earlier.
I could have done many things so much more simply and elegantly.

So my advice to all aspiring developers, and maybe even more to the old dogs who refuse to learn new tricks, is to actually do just that. Learn at least one purely functional language, one strictly OOP language and one that blends it all together, like C++ or Objective-C if that’s your cup of tea. Plus I’d invest in learning one interpreted language like Python or Ruby, as it makes prototyping and easier tasks, well, so much easier!


Put a NoSQL database like MongoDB on top of this and you have an incredible toolbox for handling almost any programming task thrown at you. Word.

PS! Don’t forget those who work hard for free to bring us some fantastic tools to develop cool things with!

Get a Member Sponsorship at the Free Software Foundation Today!


These guys are funding the GNU Project.

Until next time, The MadMan.


What do you know, now I’m back on the Modo Track…

To cite myself in my last post:

“Then we have Modo. Once my favorite polygonal modeler, back in the version 3 days. Now the whole company has been snatched by The Foundry, and all the clues given lead in the direction of massive feature and price increases. So, no more Modo for me I guess.”

Then, out of curiosity, I visited the new Modo site to check out how things were looking, and what do I see?

Modo upgrades at a 40% rebate…
So, after bragging about Modo 701’s ability to run smoothly on Linux in an earlier post, I decided to get my CC and order the upgrade from my commercial 401 license to the latest 701 SP3. Hey, it was like $220 or something… Can’t let that slip by when I’ve always wanted to learn the ins and outs of Modo. I’ve been using it since the second version, so I figured, what the heck, it’s Christmas!

Besides, I needed a stable modeler to run on Linux besides Blender.

I mean, Blender is an awesome project, but that’s the thing. It’s a “Project” in a constant state of flux more than it is a stable “Product”; one version does things differently from the next, not just in terms of the details but in the core implementation. So I find myself spending about twice the time modeling something in Blender compared to using Modo, and as we all know, time is our most precious asset in life.

So this winter I’ll get up to speed on the new Bullet physics implementation, the sculpting tools, the Python and C++ APIs and the rest of this polished package.

I’ve also purchased Substance Designer version 4, which has very smooth integration with the Modo shading system, and it’s a joy to use.
I urge you to try it out, either as a trial or as a non-commercial learning tool for $99.

My goal in the world of 3D is to gather a set of tools that makes a “good enough” pipeline, suitable for artists on a semi to low budget, that does things cleanly and efficiently and produces output that’s “on par” with expensive packages like Autodesk’s suites and tools like NukeX. Modo will definitely be a large part of this pipeline, but so will Blender and Substance Designer from Allegorithmic.

Blender has some good compositing tools, and Modo is the king of UVs and texturing and is starting to get quite good at things like rigging and animation as well.

I believe that it should be perfectly possible to get a complete 3D pipeline of good tools for around $1000. If 3D is what you wish to do for work or as an advanced hobby, that’s not a bad price to pay. But you’ll need to be constantly monitoring the extreme offers from the tool producers via newsletters, RSS and forums.

I’m so sick of reading about semi-professional artists with an image posted in magazines like 3D World who, when they list the products used, name things like: 3ds Max, V-Ray, Nuke, Mari, Photoshop and ZBrush.
Should I seriously believe that such an artist has legal licenses for software with an estimated value of $15,000–$20,000? I seriously doubt it.
And if so, it’s NOT WORTH IT! Go figure, man!

Learn how to draw, learn Blender and GIMP plus a semi-expensive package like Modo or LightWave, and you’ll be able to create results that are just as nice.

It’s in the hands and the mind of the artist, not in the tools. Only a poor craftsman blames his tools.

Until next time, have a wonderful Christmas you all and remember: Never stop learning!

40% off on Modo 701!

The MadMan.

The Future Of Decent 3D Software for Enthusiasts and Professionals alike

Create what you want for free!

Ok. We all know and love Blender. But let’s face it, it has its quirks that need some ironing out before it can totally replace a package like Autodesk XSI or Autodesk Maya for many professional artists, or at least they think it does.

Don’t get me wrong here, it is definitely the tool of choice if you want an open source solution that can do “almost anything”.

This is not to say that Blender is missing features; on the contrary, it has features that Maya has tried to include, but which Maya has shipped with pretty serious bugs since around the 2009 version.
Now Autodesk has snatched a perfectly good plugin called NEX and “integrated”, a.k.a. “slammed”, it on top of Maya’s existing buggy tools, creating somewhat of a monster. Again. Those guys really never seem to learn the basics of Software Lifecycle Management. I’ve tried using it, but it’s so buggy that it ruins my models from time to time.

The 2014 edition is already on SP3 and the bugs still aren’t sorted. This is a frustration for many small studios that depend on stable software and don’t have the resources to create their own in-house workarounds using the SDK. But what do they do? Soon they’ll release a new 2015 version with even more features and new bugs on top of that.

Then we have Modo. Once my favorite polygonal modeler, back in the version 3 days. Now the whole company has been snatched by The Foundry, and all the clues given lead in the direction of massive feature and price increases. So, no more Modo for me I guess. I have my stable commercial license for Modo 401 that never stops being valid, but hey, things evolve, right? Who knows if it will be compatible with the next operating system I’m forced to update to because of other applications’ demands?

It still amazes me to see a company like NewTek alive and actually kicking these days, with the new ChronoSculpt and NevronMotion applications that let you sculpt in time-based layers and record motion data via a cheap $99 Kinect sensor! Way to go!
How much this will be used remains to be seen, but they are on a creative roll and they NEVER give up. That is the true spirit of life. In addition they’ve released a free update to existing v. 11 (maybe also v. 10?) customers in the form of LightWave 11.6.
This edition is the most groundbreaking release since version 9.6 in my eyes. It actually is a NEW LIGHTWAVE.
A lot of cool new modeling tools and a great deal of new features on the rigging and animation side with the introduction of the Genoma system, and of course an updated integration of the Bullet physics engine, so sorely needed.
Too bad I only have an educational license for version 9.6. But they do have a good offer these days, letting me upgrade to 11.6 for around $350 or something.
But, it’s Christmas and I have other items in my budget screaming for coverage…

When it comes to a simple, fast and stable polygonal modeler, we have the option of using Silo. It’s a $159 application and an extremely competent modeler.
But it hasn’t been actively developed for over two years.
So the reason for starting this article was really to investigate the possibility of crowd-funding a purchase of the Silo source code from Nevercenter and turning it into an actively maintained open source project. Personally, I have an hour or two now and then for coding some C++, Python or whatever they’ve used to create the product.

The question is: How many others are in on a project like this?

I’ve posted a poll over at CG SOCIETY (CGTalk), and for those who would be so kind, I urge you to give your vote to one of the options presented there.

I’ve been lousy at posting new stuff to my blog lately and have probably lost a lot of readers, but hopefully some remain faithful to the old madman.

Here is the link to the post on CGC:

Poll for Open Sourcing Silo 

For those who don’t have a CGC account, get one! Kidding.

I’ll present the poll in this post as well, though it won’t be linked with the results on CGC, so it’s best if you take the time to register over there. They even have a $19/year deal for a premium subscription right now (normally $79) with a lot of goodies.

All the best,

The MadMan

18 Reasons to Use C# / .Net

(some of these also explain why C# > Java)

This list is by no means complete, but may be a helper for those that wonder if they should jump on the old Java Wagon or the shiny C#/.Net Wagon with dual suspension and enhanced body construction…

  • No pointers required! C# programs typically have no need for direct pointer
    manipulation (BUT you are free to drop down to that level if absolutely
    necessary)
  • Automatic memory management through garbage collection. Given this, C# does
    not support a delete keyword.
  • Formal syntactic constructs for classes, interfaces, structures, enumerations, and
    delegates.
  • The C++-like ability to overload operators for a custom type, without the
    complexity (e.g., making sure to “return *this to allow chaining” is not your
    problem).
  • Support for attribute-based programming. This brand of development allows you
    to annotate types and their members to further qualify their behavior. For
    example, if you mark a method with the [Obsolete] attribute, programmers will
    see your custom warning message print out if they attempt to make use of the
    decorated member.
  • The ability to build generic types and generic members. Using generics, you are
    able to build very efficient and type-safe code that defines numerous placeholders
    specified at the time you interact with the generic item.
  • Support for anonymous methods, which allow you to supply an inline function
    anywhere a delegate type is required.
  • The ability to define a single type across multiple code files (or if necessary, as an
    in-memory representation) using the partial keyword.
  •  LINQ: Support for strongly typed queries used to interact with various forms
    of data.
  • Support for anonymous types that allow you to model the shape of a type rather
    than its behavior.
  • The ability to extend the functionality of an existing type (without subclassing)
    using extension methods.
  • Inclusion of a lambda operator (=>), which even further simplifies working with
    .NET delegate types.
  • An object initialization syntax, which allows you to set property values at the
    time of object creation.
  • Support for optional method parameters, as well as named method arguments.
  • Support for dynamic lookup of members at runtime via the dynamic keyword.
    This provides a unified approach to invoking members
    on the fly, regardless of which framework the member implemented (COM,
    IronRuby, IronPython, or via .NET reflection services)
  • Working with generic types is much more intuitive, given that you can easily map
    generic data to and from general System.Object collections via covariance and
    contravariance.
  • More and more companies and projects are going for the Open Source Mono Project’s implementation to create true cross-platform applications, both on desktop and mobile platforms.
  • The current version of C#
    provides a pair of new keywords (async and await), which massively simplify multithreaded and
    asynchronous programming.

You’ll be the judge! Both languages have their charm, but personally (I learned Java many years ago) I strongly prefer C#, as you probably gathered from this article…

Parallel Computing With CUDA Extensions (Part 2)

A “kernel” in CUDA terms can be thought of as a series of instructions to be carried out
by a computation unit on a GPU. Basically a regular program.

1. You write kernels / code as if they were regular serial (top to bottom) programs, like
those designed to run on a single thread.

2. You tell the GPU to launch this code, and it will do so on multiple threads. (you can define how many)

But won’t this lead to the same code being executed multiple times?
Yes, it will, but that will be explained later.
If you write code to output “Hello World” and ask the GPU to run this on 50 threads,
you’ll be greeted back 50 times, and it will all happen in parallel.
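As a minimal sketch of that “Hello World on 50 threads” idea (the kernel name is just something I made up; the <<<blocks, threads>>> launch syntax and cudaDeviceSynchronize() are standard CUDA):

    // hello.cu -- compile with: nvcc hello.cu -o hello
    #include <cstdio>

    // __global__ marks this function as a kernel: DEVICE code launched from the HOST.
    __global__ void helloKernel()
    {
        // Every thread runs the same code; threadIdx.x is its own index within the block.
        printf("Hello World from thread %d\n", threadIdx.x);
    }

    int main()
    {
        helloKernel<<<1, 50>>>();   // launch 1 block of 50 threads, all in parallel
        cudaDeviceSynchronize();    // wait for the GPU to finish before exiting
        return 0;
    }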

But let’s say that your program code contains a vector of 50 floats.
For each of those floats you want something done, the same way, as fast as possible.
You tell the GPU to spawn 50 threads to work on this kernel (program code).

Inside the kernel, when run on the GPU, each thread can connect to one vector member, and you have full control over which thread works with which member of the vector.

Each thread starts doing the work as instructed in the code received from the CPU.
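A small sketch of that idea (the kernel name and the scaling operation are just illustrative; the point is the thread-to-element mapping via threadIdx.x):

    // One thread per element of the 50-float vector.
    __global__ void scaleKernel(float *data, int n, float factor)
    {
        int i = threadIdx.x;     // each thread picks "its" element by its own index
        if (i < n)               // guard in case more threads than elements are launched
            data[i] *= factor;   // the per-element work: same code, different data
    }

    // Launched from the HOST with one block of 50 threads
    // (d_data is a device pointer you'd have allocated with cudaMalloc):
    //   scaleKernel<<<1, 50>>>(d_data, 50, 2.0f);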

As an example, let’s say that when running the code on a CPU only, the CPU would have to traverse the vector members one by one, do the job needed, and continue on to the next member to do the same.
The total execution time for the task would vary based on how busy the CPU was and other factors, but let’s assume that we have 50 members that each needs 10 ms to be processed.

This would easily take 50 x 10 ms (500 ms) to complete, as we work in a non-parallel way.
If we compare this to how the execution would be done in a GPU-assisted way,
the time needed to process each element might be a bit higher, because the working unit will not be as fast as a regular CPU thread, so let’s say 20 ms per member.

The difference is that because these tasks are all started in parallel, the GPU would finish processing the whole vector of 50 members in just 20 ms, compared to the CPU, which would need 10 ms x 50 members, giving us 500 ms!

To not lose focus, it might help
to visualize situations in programming that could benefit from being able to do several equal tasks at the same time.

One thing that comes to my mind is image editing applications. When you have an image consisting of millions of pixels, there will be several thousand pixels that share the same characteristics / properties, like color and brightness.
If you were to write a function to lighten or change the color of all those equal pixels, you’d basically have a job that could benefit from being executed simultaneously, rather than doing the same thing to each pixel in a linear fashion.
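Here’s a rough sketch of how such a per-pixel job can map onto the GPU (the names and the brightening formula are made up for illustration; the common trick is a 2D grid of threads, one thread per pixel):

    // One thread per pixel of a grayscale image.
    __global__ void brightenKernel(unsigned char *pixels, int width, int height, int delta)
    {
        // 2D indexing: each thread works out which pixel it owns.
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;

        if (x < width && y < height) {
            int idx   = y * width + x;
            int value = pixels[idx] + delta;          // lighten the pixel
            pixels[idx] = value > 255 ? 255 : value;  // clamp to the valid range
        }
    }

    // HOST-side launch: 16x16 threads per block, enough blocks to cover the image.
    //   dim3 block(16, 16);
    //   dim3 grid((width + 15) / 16, (height + 15) / 16);
    //   brightenKernel<<<grid, block>>>(d_pixels, width, height, 30);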

Usually, when programming using only the CPU, launching and running threads in parallel is considered an expensive and cumbersome activity.
The whole point of using the GPU as a processing unit for “regular” tasks is that it’s very good at certain things, like these two:

1. Launch a lot of threads (and “a lot” is MANY, think thousands)
2. Actually run these threads in parallel

So GPUs make perfect candidates for doing the kind of processing that’s lacking in regular CPUs.

For those learning about programming, maybe as a student or on their own, I seriously believe that there will soon be heavy demand for competent C/C++ programmers who know how to program using GPU assistance, and well into the foreseeable future.

C and C++ might be lower-level than the languages you find most comfortable to use, but the truth is that even though these statically typed, compiled languages have experienced a drop in general interest over the last ten years, they’re now on the rise again thanks to technologies like this and because of the importance of power consumption / watts per cycle on modern handheld devices.

C++ is the most efficient language to use for low-power devices (if done right) compared to any other high-level language in existence today, and many large companies invest huge sums of money in the driving forces behind these languages now.

The future is mobile and the future is (hopefully) green.
To achieve this, we also need to start making software that’s green and environmentally friendly.

I hope this article has made you more interested in learning about GPU assisted processing using tools such as CUDA or OpenCL.

There’s more in the world than an Apple.

Parallel Computing With CUDA Extensions (Part 1)

First, let’s see how to rate a CPU in a parallel way of thinking.

Let’s say we have an eight-core Intel CPU.

With eight cores, you can execute 8 operations (Wide AVX vector operations) per core,
and each core has support for running two threads in parallel via Intel “HyperThreading” technology, so you get:

8 cores * 8 operations/core * 2 threads/core = 128, which is what’s called
“128-Way Parallelism”.

For more about Advanced Vector Extensions (AVX) in CPUs, check this page.

Programming without taking advantage of ANY multithreading / parallel processing
techniques means that for each program you run, you use

2/128 = 1/64 of your CPU’s total resources (including the automatic “Hyper-Threading”).

In an ordinary C/C++ program you can only run code that uses the CPU as
the computing resource.
If people really took advantage of their cores and threading capabilities, this would
probably be enough for most regular applications, but for applications that do a lot of
heavy calculations, like video / image processing or 3D graphics, it’s way better if you can
offload some of these tasks to the simpler (in terms of instructions), but very capable, GPU(s) in your machine.

One way to do this is through the use of CUDA extensions.

In this model, the CPU is considered the “HOST” and each GPU is a “DEVICE”
in your system that can be used for doing calculations.
When such a program is compiled, instructions for both the HOST and any DEVICE
are created.
In CUDA the GPU/DEVICE is seen as a “CO-PROCESSOR” to the CPU/HOST.
The model also assumes that the HOST and DEVICE have access to separate physical
memory where they can store data.
The DEVICE memory is typically a very high-speed block of memory, faster than the one
on the HOST.

The HOST is “in charge” in CUDA and sends messages to the DEVICE telling it what to do.
The HOST keeps track of:

1. Moving data:
   – from CPU memory -> GPU memory
   – from GPU memory -> CPU memory
   (CUDA’s version of C’s memcpy() is cudaMemcpy())
2. Allocating GPU memory
   (again, CUDA uses cudaMalloc() instead of malloc())
3. Launching “kernels” on the GPU (in CUDA, the HOST launches “kernels” on the DEVICE)

A typical flow in a CUDA application would be something like:

1. CPU runs cudaMalloc on GPU
2. CPU copies input data from CPU->GPU with cudaMemcpy
3. CPU launches the transferred “kernels” on the GPU (kernel launch)
4. CPU copies results back with cudaMemcpy
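As a minimal host-side sketch of that flow (error checking left out, and doubleKernel is just a made-up kernel so there is something to launch; the “kernel” part itself is what Part 2 digs into):

    // flow.cu -- the typical HOST-side CUDA flow from the list above.
    #include <cstdio>

    __global__ void doubleKernel(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 2.0f;
    }

    int main()
    {
        const int n = 50;
        float host_data[n];
        for (int i = 0; i < n; ++i)
            host_data[i] = (float)i;

        float *device_data = NULL;

        // 1. Allocate memory on the GPU (DEVICE).
        cudaMalloc((void **)&device_data, n * sizeof(float));

        // 2. Copy input data from CPU (HOST) memory to GPU memory.
        cudaMemcpy(device_data, host_data, n * sizeof(float), cudaMemcpyHostToDevice);

        // 3. Launch the kernel on the GPU: one block of 50 threads.
        doubleKernel<<<1, n>>>(device_data, n);

        // 4. Copy the results back from GPU memory to CPU memory.
        cudaMemcpy(host_data, device_data, n * sizeof(float), cudaMemcpyDeviceToHost);

        cudaFree(device_data);
        printf("host_data[10] is now %.1f\n", host_data[10]);  // prints 20.0
        return 0;
    }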

So, what is this “Kernel” stuff all about?

Guess we’ll find out in part 2 of this series…