<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected {color:[[ColorPalette::PrimaryDark]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:alpha(opacity=60);}
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0; top:0;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0 3px 0 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0; padding-bottom:0;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
StyleSheet for use when a translation requires any css style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean which need larger font sizes.
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none !important;}
#displayArea {margin: 1em 1em 0em;}
noscript {display:none;} /* Fixes a feature in Firefox where print preview displays the noscript content */
}
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
<div class='toolbar' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
These [[InterfaceOptions]] for customising [[TiddlyWiki]] are saved in your browser

Your username for signing your edits. Write it as a [[WikiWord]] (eg [[JoeBloggs]])

<<option txtUserName>>
<<option chkSaveBackups>> [[SaveBackups]]
<<option chkAutoSave>> [[AutoSave]]
<<option chkRegExpSearch>> [[RegExpSearch]]
<<option chkCaseSensitiveSearch>> [[CaseSensitiveSearch]]
<<option chkAnimate>> [[EnableAnimations]]

Also see [[AdvancedOptions]]
//Disclaimer: I am a great supporter of machine learning. I studied this subject extensively and I used it in my own work with great success. However, I am somewhat annoyed to review papers that use machine learning improperly or without quite knowing what they are doing. This tiddler is a rant against the abuse of machine learning in the scientific literature.//

Many researchers and practitioners regard machine learning as a dogmatic, all-powerful technique that can solve all problems. A sort of computational garden of Eden. The truth is that machine learning is nothing more than a heterogeneous collection of specialized statistical methods. There are indeed contexts in which machine learning works very well, and it is the right method to use. There are other contexts in which you can pretend that machine learning works well by hand-picking one of many methods in the literature and funneling your data through it. But there are also many cases in which machine learning is nothing more than a contrived scientific exercise to get a paper published in a major conference.

The main problem with machine learning"""--which is also its greatest strength--""" is that it is very general. Many disparate problems can be reformulated to work with your favorite learning algorithm. Of course doing so requires good dexterity with math, but this should not be a problem for most researchers. My grievance is that you do not really need to understand how the method works to use it. In other words, machine learning is a so-called //black-box// for most purposes. You feed your data into the algorithm and sure enough you will get something back. Now the real question is whether or not what you get makes sense.

You may argue that having a generic //black-box// is actually very good. We are always looking for self-contained algorithms that work well with a large number of problems. But I do not think this is what academic research is about. A black-box is excellent when you want to engineer a system that works well for your application; when you want to //sell// it. And I love stuff that works well! However, in research you want to make people //understand// what you are doing. Anyone can pour their data through an existing method and show you the results without knowing what is going on. The real challenge is to shed light on the intricacies of your method and why it works the way it does.

The trick about machine learning is that you can always give the impression that your technique works well by picking the right data and lots of it. In fact, most machine learning techniques work quite well if you test them with data that is very similar to your training set, but this is really not the point of machine learning. The key to machine learning is //generalization//. Ideally, you would like to devise an algorithm that is trained on a small data set and can handle a variety of problems in the domain of the data. Yet unfortunately, many algorithms that are proposed in the literature seem to work only for a small subset of rather specific scenarios. What makes the problem even more difficult is that we don't have good quantitative tools to express the ability of an algorithm to generalize. In fact, some researchers even argue that coming up with a good approach for machine learning is more of an art than a science.

When you face a difficult technical challenge and you are a researcher, the first task is always to //understand// the problem. An understanding of the problem helps you develop a model. If your model is good enough, then you can build an algorithm around it and happily code up a solution. Say you want to develop a spell checker: you study the rules of grammar and define a set of rules that detect potential errors in the text. This is a somewhat trivial example, but it proves the point. Unfortunately, many problems are too difficult to model, and even moderately expressive models are simply computationally intractable. Then there are problems that are highly data dependent, and again for these problems there is little hope of developing a model that is expressive enough. So, in these cases """--when everything else fails, that is--""" your last resort is machine learning. With machine learning you trade off an understanding of the problem and some guarantees on your solution method for something that works in practice. This is an //important// trade-off, and in some cases it is the right trade-off. Machine learning is the proper tool for these kinds of problems. Face recognition is a perfect example. For years, researchers tried to develop sophisticated appearance models of faces and facial expressions. Some models were very intricate, but they all failed to provide a good method for face detection. Then Viola and Jones proposed a statistical [[method|http://research.microsoft.com/en-us/um/people/viola/Pubs/Detect/violaJones_IJCV.pdf]] that was in many ways simpler than previous ones, but outperformed everything else. Their solution was remarkable and thoughtful. What I don't like is when researchers simply //throw// machine learning at all kinds of problems until they find the one problem where it seems to work well. Did anyone say PCA...
Digital delivery is growing in popularity as a means for distributing content, but it took a while for people to accept it as a viable alternative to retail stores. First of all, there is a strong cultural inertia against it. People are used to exchanging money for physical objects that they can see, touch, and smell. Digital content, instead, is rather intangible, and its purchase is therefore very unsatisfying. As a matter of fact, even eCommerce struggled for the same reason in the beginning, but now we accept it because we are somehow more accustomed to the kind of //virtual money// that flows out of our credit cards invisibly. Yet when you buy something from Amazon or any other online outlet, you still get a physical object delivered to your house. With digital delivery, you have to go one step further -- you exchange virtual money for a virtual object. Virtual objects, of course, are rapidly becoming more concrete and tangible now that computers and digital content play such an important role in our lives. Still, for a long time, when you bought digital content you got a physical medium, like a CD or DVD, packaged in a protective case of some sort. You could still touch it, see it, and store it on your bookshelf. In a sense, holding a disc in your hands makes you feel that you fully own it. Do you feel that you own those DRM-encumbered mp3s you buy from an online music store? What about those dusty CDs in that old organizer of yours?

Data streamed from an online service to your hard drive also feels rather volatile. A computer file can be erased, your hard drive can get corrupted, or the remote service can simply decide to pull the plug and disable your content at some point. This very issue spurred heated debates. Consumers demanded the right to protect their purchases by making legal backups, but for a while companies rejected these demands as a euphemism for computer piracy -- and they weren't entirely wrong. However, things have improved, and most services now allow you to download content that you own as many times as you want. But what happens when these services go out of business or decide to discontinue your product?

Digital content is often targeted at geeks, and for geeks digital delivery may feel unsatisfying for yet another reason. In fact, you don't need to be a computer nerd to know about //ways// to get stuff for free. Free movies, free music, free games are not hard to find, but a decent number of people have the conscience and ethics to pursue legal alternatives. Now, what's the difference between a file that you download legally and another that you obtain by other means? There is almost no difference. Actually legal downloads are worse, because they are often burdened by annoying protection mechanisms and other DRM nonsense. In other words, there is an even smaller incentive for those familiar with computers to pay for a service of digital delivery.

Despite all these challenges, digital delivery is becoming ever more popular, and more and more people are beginning to accept it. How did that happen?
To set up a folder on the Mac to be shared with a Windows machine, the following guide, though old, is still valid:


The real problem is how to log in with the desired account on the Mac from Vista. Here is a solution:
* In Windows Explorer, type in the address bar: //"""\\IP_address_of_Mac\short_name_of_mac_account"""//
* In the login prompt use:
** username: //"""name_of_workgroup\short_name_of_mac_account"""//
Lyx is a great program for typesetting documents using the power of Latex without all the hassle that comes with it. However, it is sometimes difficult to get Lyx to conform exactly to specific layout guidelines, since you do not have direct control over the underlying Latex engine. One common problem is how to use a publisher-provided document class with Lyx. The process to accomplish this is slightly different depending on the platform you are working on.

''Mac OS X with the MacTex Distribution''  

//Add the class file//
Say, we want to add the class {{{MyCrazyStyle.cls}}} to our latex distribution.
In a standard MacTex distribution, all Latex class files are located in the folder
We first need to create a sub-folder called {{{MyCrazyStyle}}} and then copy the file into it.
NOTE: you need {{{sudo}}} privileges to modify the Latex folder.

//Add the bibliography file (if needed)//
All bibliography files are located in the folder
As before, we again need to create a new folder and place the new {{{bst}}} file in it. The final location of the file will then be

//Add a new layout//

Create a new text file with lines similar to the following:
{{{
#% Do not delete the line below; configure depends on this
#  \DeclareLaTeXClass[IEEEconf]{article (IEEEconf)}

# Read the definitions from article.layout
Input article.layout
}}}
Here the name in the square brackets is the actual name of the class used in the Latex preamble (e.g. {{{\documentclass[a4paper, 10pt, conference]{IEEEconf}}}}) and the name in the curly braces is the label that you will see in the Lyx options (see below).
# Save the file with extension {{{.layout}}} in the folder {{{$HOME/Library/Application Support/LyX-1.6/layouts/}}}

//Reconfigure Lyx//
* Open Lyx and under the Lyx menu select {{{reconfigure}}}.
* Close and reopen Lyx.

//Apply the class//
* Create a new document and select the new class from //document->settings->Document Class//.
* You should now have your document typeset according to the new class.

''Windows with the MikTex Distribution''  

//Add the class file//
Say, we want to add the class {{{MyCrazyStyle.cls}}} to our latex distribution.
In a standard MikTex distribution, all Latex class files are located in the folder
{{{C:\Program Files\MiKTeX 2.8\tex\latex}}}
We first need to create a sub-folder called {{{MyCrazyStyle}}} and then copy the file into it.
Now, go to the command prompt and type {{{texhash}}} to force MikTeX to update its index with the new class.
NOTE: you need administrative privileges to modify this folder in Windows Vista / 7.

//Add the bibliography file (if needed)//
All bibliography files are located in the folder
{{{C:\Program Files\MiKTeX 2.8\bibtex\bst}}}
As before, we again need to create a new folder and place the new {{{bst}}} file in it. The final location of the file will then be
{{{C:\Program Files\MiKTeX 2.8\bibtex\bst\MyCrazyStyle\MyCrazyStyle.bst}}}

//Add a new layout//

Create a new text file with lines similar to the following:
{{{
#% Do not delete the line below; configure depends on this
#  \DeclareLaTeXClass[IEEEconf]{article (IEEEconf)}

# Read the definitions from stdclass.inc
Input stdclass.inc
}}}
Here the name in the square brackets is the actual name of the class used in the Latex preamble (e.g. {{{\documentclass[a4paper, 10pt, conference]{IEEEconf}}}}) and the name in the curly braces is the label that you will see in the Lyx options (see below).
# Save the file with extension {{{.layout}}} in the folder {{{C:\Program Files\LyX16\Resources\layouts}}}

//Reconfigure Lyx//
* Open Lyx and under the tools menu select {{{reconfigure}}}.
* Close and reopen Lyx.

//Apply the class//
* Create a new document and select the new class from //document->settings->Document Class//.
* You should now have your document typeset according to the new class.

* http://wiki.lyx.org/Layouts/CreatingLayouts
* http://stefaanlippens.net/customLaTeXclassesinLyX
* http://wastedmonkeys.com/2007/09/27/adding-a-new-class-in-lyx-windows
When you open an old project from XCode 3 in XCode 4, the new version of the IDE will add a few additional files inside the project bundle:
* xcuserdata
* project.xcworkspace
* $USER.perspectivev3
These files are likely needed to help XCode 4 add new functionality to the project settings without breaking compatibility. 

This change may have some side effects if you keep your XCode projects under version control. For instance, I once tried to revert to an older version after I made some bad changes to one of my projects, but nothing happened. Then I realized that the new change was saved in one of these new files added by XCode 4, but these files were not under source control, so my revert command had no effect.

If you do keep your XCode projects under source control, you may want to skip the file with extension {{{perspectivev3}}}, since that file stores user-specific changes and it may pollute the environment of other developers.
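If you happen to use git, one way to skip these files is an ignore rule; the patterns below are a sketch based only on the file names listed above:

```
# .gitignore (sketch): keep XCode 4 user-specific files out of the repository
*.perspectivev3
xcuserdata/
```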
I discussed before that Boost provides a clever [[mechanism|Dispatching Shared Pointers From a Class Method]] for dispatching shared pointers to {{{this}}} directly from a class method. However, there are a few circumstances in which calling the {{{shared_from_this}}} method triggers an exception. 

//Dispatching the this pointer at construction//
The {{{this}}} pointer is only valid once an object is fully constructed, therefore it does not make sense to dispatch it from a constructor. Boost will raise the {{{bad_weak_ptr}}} exception if you try to do that.

//Dispatching the this pointer from the stack//
This is a very subtle detail about shared pointers. You cannot invoke {{{shared_from_this}}} if the class has been instantiated on the stack. The reason is that the {{{enable_shared_from_this}}} base class requires at least one shared pointer instance that owns {{{this}}}.

Let's look at what would happen if you could simply allocate an instance on the stack:


{{{
#include <cassert>
#include <iostream>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

class A : public boost::enable_shared_from_this< A > {
public:
  boost::shared_ptr< A > getme() { return shared_from_this(); }
  void printme() { std::cout << "hello world!" << std::endl; }
};

boost::shared_ptr< A > g_sharedA;

void foo()
{
  A myA;                    // instance lives on the stack
  g_sharedA = myA.getme();  // myA is destroyed when foo returns
}

int main()
{
  foo();
  assert( g_sharedA.get() != NULL );
  g_sharedA->printme();     // would access a destroyed object
  return 0;
}
}}}


After I call {{{foo}}}, the shared pointer {{{g_sharedA}}} still behaves as if it were valid, but in fact the object it references has been destroyed. As a result the assertion will succeed, but as soon as I access the shared pointer on the last line, the software will crash. By explicitly requiring a class instance to live on the heap and be owned by a valid shared pointer, the designers of Boost cleanly removed the chance of ever having a dangling smart pointer.

The smart pointer library in Boost is designed to provide a robust and reliable mechanism for memory management in C++. Most importantly, smart pointers are meant to eliminate software faults due to dangling pointers, which are among the biggest flaws that plague C/C++ code. As a result, smart pointers must become NULL as soon as the object they manage is destroyed, and they should never return a reference to an invalid item in memory. While the constraints I discussed in this tiddler do in fact limit the usage of smart pointers in practice, they are also a healthy choice that helps Boost fulfill its promise of delivering higher software quality for all.

* http://stackoverflow.com/questions/459414/boost-weak-ptr-cast-in-shared-from-this
* http://www.boost.org/doc/libs/1_45_0/libs/smart_ptr/enable_shared_from_this.html
The command prompt in Windows is truly an ancient piece of software. There are some commonly used alternatives for it such as Cygwin or Microsoft's new PowerShell, but these solutions don't understand standard DOS commands and operate quite differently. What if you just want to run DOS commands in a shell that is more in touch with the 21st century than the crappy {{{cmd.exe}}}?

Luckily there is a solution, it's free, it's awesome, and it's called [[PowerCmd|http://www.powercmd.com]]. 
Any serious computer user must sooner or later commit some time to cleaning junk off disk to free up space. Simply emptying the trash and removing your browser cache is hardly effective anymore these days. Likewise, navigating the file system and deleting random files as they come along does not cut it either, given the sheer number of files that live on our drives now. So what to do? It turns out that there are a few very useful tools that help you visualize the contents of your hard drive and make it extremely easy and intuitive to find stuff that can be thrown away.



[[Disk Inventory X|http://www.derlien.com/]] (Mac), WinDirStat (Windows), and KDirStat (Linux)

Despite having different names, these three open source applications are almost exact clones of each other, although KDirStat is the one that came up with the idea first!
One of the easiest ways to batch convert files is to use ImageMagick's command line tools:

# Install ImageMagick 
# open the Command Line
# Navigate to the folder containing the files to convert
# Type {{{ mogrify -format png *.ppm }}}

This should convert all the {{{ppm}}} files in the current folder to the {{{png}}} format.
QuickTime is a powerful application that can open and convert a considerable number of video and other media files. But QuickTime is not just an application! On Mac OS X, QuickTime is a framework fully integrated with the operating system. All the capabilities of QuickTime are available programmatically to application developers. In fact, you can automate QuickTime tasks by writing a simple high-level AppleScript. This is exactly what Jesse Shanks has done with QuickTime Quick Batch, which automates video conversion using QuickTime.

It is often useful to create a number string with a fixed number of digits. For instance, this is a common operation when generating a sequence of files from your programs. Here are some of the best ways I could find to do this in various languages. In each example I fit the number 64 in four digits:

//C++//
{{{
#include <iostream>
#include <iomanip>

const int NUMBER_OF_DIGITS = 4;

int main( int argc, char** argv )
{
	const int number = 64;
	std::cout << std::setw( NUMBER_OF_DIGITS ) << std::setfill( '0' ) << number << std::endl;
	return 0;
}
}}}

//Python//
{{{
print str( 64 ).zfill( 4 )
}}}

* http://stackoverflow.com/questions/134934/display-number-with-leading-zeros
I wrote before about a [[cheap technique|Face Reconstruction for the Masses]] that is used for face reconstruction. A few days ago I tried to learn more about [[BigStage|http://bigstage.com/login.do]], which is a service for creating personalized avatars. BigStage was first [[unveiled|http://www.socaltech.com/big_stage_unveils_at_ces/s-0013005.html]] at CES 2008 as part of [[Intel's keynote|http://www.youtube.com/watch?v=hM0YIS4fmio]]. I watched the segment about BigStage on YouTube and I must say that I am very disappointed. As expected, the video is full of the usual PR stunts and the presenter touts this service as revolutionary, but the way I see it, this is just a rip-off of an existing technology with a little twist to avoid patent infringement.

The technology is pretty much the same as the one used in FaceGen -- only worse. The only highlight is that you can "automatically" put your 3D face in a picture or video. This would be really cool, but there is a catch: you can only place your 3D face in their selection of pictures and videos. Again, this is very unremarkable in terms of technology. They must have used some existing computer vision software to manually resolve the 3D position of the face in those select assets, and then they essentially play back the 3D data to sync in your avatar. Lame.
While you can use most Boost libraries by simply including the appropriate header files, some libraries must be linked explicitly against your program. On Windows using Visual Studio, the Boost libraries use a clever automatic linking mechanism that makes the linking process completely transparent as long as you build Boost correctly and the compiler knows where to find the corresponding binaries.

All the magic happens in a single header file called {{{auto_link.hpp}}}. This file uses several macros to resolve the name of the library:

For example, for the (static) multi-threaded debug version of the {{{filesystem}}} library in Boost 1.44 built for Visual Studio 2010, the values of these macros include
BOOST_LIB_NAME = "boost_filesystem"

The actual linking is accomplished with a {{{#pragma comment(lib, ...)}}} directive, which passes the fully decorated library name (in this example {{{libboost_filesystem-vc100-mt-gd-1_44.lib}}}) to the linker.
Most errors in building the library will result in the Python error {{{ImportError: DLL load failed: The specified module could not be found.}}} Check the following:
* Make sure the output binary is a ''dll''.
* Make sure that the extension of the output binary is ''pyd''.
* Make sure that the Boost Python dlls are either in your path or that you copy them in the same folder as the python module.

* If the Boost Python dlls or any of their dll dependencies are not found by the Python interpreter, you still get a misleading {{{ImportError}}} rather than an OS error about a missing dll.
In the past I looked at [[how to build a Python extension on Mac OS X|Compiling a Boost Python Extension in Mac OS X]]. Here I am considering how to actually build the Boost library itself. 

One of the biggest hurdles to building the Boost library on Mac OS with Python support is that the pre-built Python binary on [[www.python.org]] is 32-bit.

# When you install Python the binary is placed inside the framework in {{{/Library/Frameworks/Python.framework}}} and a symbolic link to the actual Python executable is placed in {{{/usr/local/bin}}}. We will use this fact later.
# Download the latest version of Boost Jam and place the {{{bjam}}} binary in {{{/usr/bin/}}}. Type {{{which bjam}}} on the command line and make sure that the correct version of bjam is being used.
# Download the latest Boost source and navigate to its directory.
# We need to configure the build process for Boost. Use a command line similar to
sudo ./bootstrap.sh --with-python=/usr/local/bin/python --prefix=/Library/Developer/boost_1_43_0 --exec-prefix=/Library/Developer/boost_1_43_0 --with-libraries=python

Here the most important option is {{{--with-python}}}, which specifies exactly which Python version we want to build Boost against. The {{{--with-libraries=python}}} option specifies that we only want to build the Boost Python library; you can omit it in most cases.
# Now, we need to build the Boost library making sure to enforce a 32-bit architecture:
sudo bjam architecture=x86 address-model=32 install

At this point Boost should compile without errors!
Building OpenCV is now substantially easier with the new CMake build process. However, building the Python wrapper on Mac OS X is not entirely straightforward, especially if you do not want to build the library against the standard Python installation on Mac OS X. In my case, I wanted to build the OpenCV Python wrapper against my custom MacPorts build of Python 2.6, which was built as a 32-bit Universal binary. Here are the steps to do so.

# Configure your build with CMake once. At this point CMake has configured your build automatically to build against the standard Python Framework found in {{{/Library/Frameworks/Python.framework}}}. This is NOT what we want!
# Switch CMake to //Advanced View// and modify the settings as follows

{{{PYTHON_EXECUTABLE    /opt/local/bin/python2.6}}}
{{{PYTHON_INCLUDE_DIR    /opt/local/Library/Frameworks/Python.framework/Versions/2.6/Headers}}}
{{{PYTHON_LIBRARY    /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/config/libpython2.6.dylib}}}

# Configure CMake again
# Generate the project files from CMake
# Open the XCode project that was generated
# Change the Active Configuration to //Release//
# Change the Active architecture to //i386//
# Build the Project
# Now we need to copy the python wrapper library to the active Python installation with the command line:
sudo cp <OPENCV_BUILD_DIRECTORY>/lib/Release/cv.so /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/opencv
Of course, you should replace <OPENCV_BUILD_DIRECTORY> with the actual directory you used to build OpenCV.
XCode defines a lot of environment variables that can be used for convenience in project settings. However, there is no mention of them in XCode's documentation. To find out all these environment variables I followed a very clever [[trick|http://codeendeavor.com/archives/626]]. The idea is:
* create an empty project
* add a new run script build phase with the following command {{{env > ENV}}}
* run the project

At this point there is a file called ENV in the project folder that contains all of XCode's environment variables.

I called my project //TestXCodeEnvironmentVariables// and this is what I got:
SYSTEM_DEVELOPER_TOOLS_RELEASENOTES_DIR=/Developer/ADC Reference Library/releasenotes/DeveloperTools
SYSTEM_DEVELOPER_DOC_DIR=/Developer/ADC Reference Library
ARCHS_STANDARD_32_64_BIT=x86_64 i386 ppc
SYSTEM_DEVELOPER_RELEASENOTES_DIR=/Developer/ADC Reference Library/releasenotes
PATH_PREFIXES_EXCLUDED_FROM_HEADER_DEPENDENCIES=/usr/include /usr/local/include /System/Library/Frameworks /System/Library/PrivateFrameworks /Developer/Headers /Developer/SDKs /Developer/Platforms
GCC_PFE_FILE_C_DIALECTS=c objective-c c++ objective-c++
VERSION_INFO_STRING="@(#)PROGRAM:TestXCodeEnvironmentVariables  PROJECT:TestXCodeEnvironmentVariables-"
EXCLUDED_RECURSIVE_SEARCH_PATH_SUBDIRECTORIES=*.nib *.lproj *.framework *.gch (*) CVS .svn *.xcodeproj *.xcode *.pbproj *.pbxproj
BUILD_COMPONENTS=headers build
JAVAC_DEFAULT_FLAGS=-J-Xms64m -J-XX:NewSize=4M -J-Dfile.encoding=UTF8
SYSTEM_DEVELOPER_GRAPHICS_TOOLS_DIR=/Developer/Applications/Graphics Tools
SYSTEM_DEVELOPER_JAVA_TOOLS_DIR=/Developer/Applications/Java Tools
SSH_ASKPASS=/Developer/Library/PrivateFrameworks/DevToolsInterface.framework/Resources/Xcode SSHPassKey
VALID_ARCHS=i386 ppc ppc64 ppc7400 ppc970 x86_64
SYSTEM_DEVELOPER_DEMOS_DIR=/Developer/Applications/Utilities/Built Examples
SYSTEM_DEVELOPER_PERFORMANCE_TOOLS_DIR=/Developer/Applications/Performance Tools
SYSTEM_DEVELOPER_TOOLS_DOC_DIR=/Developer/ADC Reference Library/documentation/DeveloperTools

One of the most interesting aspects of Mac OS X is application bundles. With application bundles, application dependencies and resources are tidily organized in a single entity and installing software becomes as easy as drag and drop. As the name implies, the purpose of bundles is to //bundle// the executable of an application with all the related resources and dependencies into a single package. Among other things, application bundles are an elegant solution to the so-called [[dependency hell|http://en.wikipedia.org/wiki/Dependency_hell]]. In this tiddler I explain how dependencies are resolved by Mac OS X within bundles and how to use them correctly when building applications with XCode.

''What is an Application Bundle''
Applications on the Mac are really Unix folders disguised as single files in the Finder. If you right-click an application in the Finder and select "Show Package Contents", you can actually peek into the contents of the application bundle. You can also verify this by investigating an application in the terminal. For instance, let's take a look at QuickTime:
Wormhole:~ Gabe$ ls -al /Applications/QuickTime\ Player.app
total 0
drwxr-xr-x   3 root  wheel   102 May 20  2009 .
drwxrwxr-x+ 80 root  admin  2720 Jun  7 22:11 ..
drwxr-xr-x  10 root  wheel   340 Apr  1 14:29 Contents
Wormhole:~ Gabe$ ls -al /Applications/QuickTime\ Player.app/Contents/
total 8
drwxr-xr-x   10 root  wheel   340 Apr  1 14:29 .
drwxr-xr-x    3 root  wheel   102 May 20  2009 ..
lrwxr-xr-x    1 root  wheel    28 Dec  9  2009 CodeResources -> _CodeSignature/CodeResources
drwxr-xr-x    2 root  wheel    68 Jul 16  2009 Frameworks
-rw-r--r--    1 root  wheel  9578 Feb  3 23:38 Info.plist
drwxr-xr-x    3 root  wheel   102 Apr  1 14:29 MacOS
-rw-r--r--    1 root  wheel     8 Jul 22  2009 PkgInfo
drwxr-xr-x  149 root  wheel  5066 Apr  1 14:29 Resources
drwxr-xr-x    3 root  wheel   102 Apr  1 14:29 _CodeSignature
-rw-r--r--    1 root  wheel   457 Mar 10 13:08 version.plist
Wormhole:~ Gabe$ ls -al /Applications/QuickTime\ Player.app/Contents/MacOS/
total 11384
drwxr-xr-x   3 root  wheel       102 Apr  1 14:29 .
drwxr-xr-x  10 root  wheel       340 Apr  1 14:29 ..
-rwxr-xr-x   1 root  wheel  14634528 Feb  4 00:25 QuickTime Player

Here you can see that the actual executable of QuickTime is buried in {{{/Applications/QuickTime\ Player.app/Contents/MacOS/QuickTime\ Player}}}.

''Dependency Resolution in Mac OS X''

The most interesting kind of application dependencies in Mac OS X are dynamic libraries, called //dylibs//, which are equivalent to //dlls// in Windows and //so// files in Linux. One of the most peculiar aspects of Mac OS X is how the paths to dynamic libraries are resolved. Let's see how the same thing is done in other operating systems first:

//Windows//
On Windows, the only way to specify the search path for a dll is to add the path to the PATH environment variable. Of course, one can do this by setting the system's value of PATH, but a better solution is to delay the loading of dlls and then specify the value of PATH programmatically from an application as I already discussed [[here|Delay Loaded Dlls in Visual Studio]].

//Linux//
On Linux variants, other than modifying the PATH as in Windows, we can also set an //rpath// on an executable or library at link time, which specifies a set of search paths for finding dynamic libraries (called shared libraries in Linux).

//Mac OS X//
These two articles describe in detail how Mac OS X resolves the paths of libraries and frameworks:
* [[CodeShorts Article|http://www.codeshorts.ca/tag/osx/]]
* [[Dave Dribin's Blog|http://www.dribin.org/dave/blog/archives/2009/11/15/rpath/]]

Those articles are fairly thorough, but I will instead try to explain this by way of example. We are going to use a command line tool on the Mac called {{{otool}}} that allows us to look at the library dependencies of executables and libraries. Let's look inside the {{{ls}}} command:
Wormhole:~ Gabe$ otool -L /bin/ls
	/usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 123.0.0)
Here we can see that {{{ls}}} depends on two libraries and that it specifies their full paths. This is similar to how search paths are managed in Linux. However, on the Mac there are a few special keywords that can be used to specify the location of a library relative to the application bundle:
# {{{@executable_path}}}    resolves to the directory containing the application's main executable (i.e. {{{Contents/MacOS}}} inside the bundle)
# {{{@loader_path}}}    resolves to the directory of the binary (executable or library) that is loading the dependency
# {{{@rpath}}}    resolves to the list of run path search paths embedded in the executable at link time

''Resolving Application Dependencies in XCode''
While I found several resources online that try to address this, all of them leave out a few important details. When you build a dylib yourself, you typically have more flexibility, but the real challenge is how to resolve dependencies for existing third party libraries. I am going to describe these two cases separately.

//Custom Libraries//
# When you build a dylib yourself, open the target properties and set the //installation directory// to {{{@rpath}}}. The actual value for {{{rpath}}} is specified by the application that uses the library, so this method gives you the most flexibility.
# Link your dylib to your application.
# Create a //copy files// build phase, set the destination to //executable// and the path to {{{../Libraries}}}, and assign your library to it. This will copy the dependency into {{{MyApplication/Contents/Libraries}}}.
# In the properties for your application target, set {{{../Libraries}}} as your run path. This way the loader knows where to look for libraries that specify {{{@rpath}}} as their installation directory.

//Third party libraries//
Here things are a little trickier, since you need to fix up the library paths by hand in your executable:
# Perform all the steps as before.
# Use {{{otool -L}}} on the executable of your application (located in {{{MyApplication/Contents/MacOS}}}).
# Look through the list of dependencies and determine which ones have hard coded paths that are non-standard (e.g. {{{~/Documents/MyProjects/Libraries/libboost_filesystem.dylib}}}).
# Add a new //Run Script// build phase to your project.
# For every relevant library found in the previous step, add a line to the script of the form:

install_name_tool -change <badPath/library.dylib> <@rpath/library.dylib> $TARGET_BUILD_DIR/$TARGET_NAME.app/Contents/MacOS/<nameOfExecutable>

For example:

install_name_tool -change libboost_filesystem.dylib @rpath/libboost_filesystem.dylib $TARGET_BUILD_DIR/$TARGET_NAME.app/Contents/MacOS/MyApp

One of the things that makes C++ a difficult language is that it is extremely easy to write C++ code that is sloppy. Unless you spend the time to design your software properly and think about your implementation in advance, your code defaults to bad code. This is why C++ is not a particularly good programming language for experimentation and prototyping.

When you use the {{{const}}} keyword in C++, you promise the compiler that you will not try to modify the object. So, if you try by mistake to modify an object that was qualified as {{{const}}}, the compiler will zap you. Now, for this design aid to work properly, you also have to qualify your class methods as {{{const}}}, when appropriate, so that the compiler knows which methods are supposed to change the class data members and which should leave them unmodified. The problem in C++ is that methods are not {{{const}}} by default, so if you don't know about this feature, or you are too lazy to use it, your code will be of lower quality and more flaky. If C++ instead made //constness// the default behavior, your build would break unless you designed the code properly and removed constness explicitly, when necessary.

It is also true, however, that if the designers of C++ made the default behavior more prone to break your build, many programmers after a while would end up disabling constness all the time without thinking about it.   
C++0x is a new revised standard for C++ that still needs to be officially approved, but it is already supported by several compilers. Here I list the compilers that have documented support for the standard.

* Visual Studio 2010 supports it out of the box
* GCC supports it only from version 4.3 onward and you need to use the additional flag {{{-std=gnu++0x}}} as explained [[here|http://gcc.gnu.org/onlinedocs/gcc/Standards.html]] 
* The Intel Compiler supports it in version 11, but you again need to set an additional flag {{{-std=c++0x}}} as explained [[here|http://www.intel.com/software/products/compilers/docs/clin/main_cls/copts/ccpp_options/option_std.htm]]
* LLVM is working to support it as explained [[here|http://clang.llvm.org/cxx_status.html#cxx0x]]

* XCode currently does not have any built-in support for the C++0x standard. I was able to compile some cutting edge code using [[type inference|The auto keyword in C++]] with the Intel Compiler 11 extension for XCode.
* It may be possible use XCode with a newer version of gcc (>4.2) as an external build tool.
* XCode 4 will use an updated version of LLVM as the default compiler and it may provide better support for C++0x 
* The Boost libraries already implement many features of C++0x that will eventually go into the standard library, such as [[std::normal_distribution|http://www.boost.org/doc/libs/1_43_0/boost/random/normal_distribution.hpp]].
The {{{boost::iostreams}}} library makes it easy to create custom streams that redirect a character buffer to a low level interface. For instance, this library can be used to create a custom stream that works exactly like the familiar {{{std::cout}}}, but instead sends character data to a device through the serial port. As another non-trivial example, I used this library to create a stream that can send strings to the Python interpreter from within a C++ module.

There are, however, a few caveats to be aware of. 


Say you created a custom output stream called {{{myCout}}} and want to send a string as follows:


myCout << "Hello World";


Unfortunately, nothing will happen! The problem is that the buffer has not been flushed, so Boost will actually never call the {{{write}}} function of your //sink//. The buffer can be flushed as follows:


// method 1: use std::endl
myCout << "Hello World" << std::endl;

// method 2: use std::flush
myCout << "Hello World" << std::flush;

// method 3: trigger a flush by calling strict_sync
myCout << "Hello World";
myCout.strict_sync();

// method 4: trigger a flush by simply closing the stream
myCout << "Hello World";
myCout.close();


When you think about it, it actually makes sense. When you are dealing with a buffered stream, you typically want to accumulate as much data as possible first and then flush it all at once. Boost was designed so that you can control exactly when you want to flush a buffer.
Qt's build process is cumbersome. In fact, it is not unlikely to run into pretty frustrating build problems when you work with Qt. What makes the build process extra tricky is that some of Qt's constructs, such as signals and slots, rely on non-standard language features, and as such your source must be preprocessed with custom tools that turn your Qt code into conforming C++. The problem with this is that your compiler and linker do not actually understand some of the key features of Qt, so if something goes wrong your standard build tools are typically unable to provide indicative messages to the programmer.

Here are a few common pitfalls that may break your build.

''Your Source Is Not Processed by Qt's Custom Tools''
You have to make sure the [[MOC|http://doc.trolltech.com/4.6/moc.html]] command line tool processes your C++ headers before you compile.
* If you create your project with qmake, make sure that all your headers are specified in the //pro// file.
* If you are using VS2008, make sure that your headers are not excluded from the build in the current configuration. Right click on the header in the Solution Explorer and select //Properties//.
* If you are using XCode, make sure that a build script runs the MOC on your headers.

''Your Class Declaration Is not Correct''
* Make sure that your class contains the macro {{{Q_OBJECT}}} in its declaration.
* Make sure that you declared your signals and/or slots correctly.
* I also found that if you have the {{{Q_OBJECT}}} macro, but no signals or slots are defined, you may get an error.
* Make sure that you include Qt's headers for the base classes that you are using in your implementation. If you fail to do so, you will get cryptic linker errors triggered by the {{{Q_OBJECT}}} macro not expanding correctly. This is a reminder as to why you should not use macros in C/C++ if you can avoid them.

''Missing Virtual Function Table''
* Make sure that your class has a virtual destructor. You can do this by adding the {{{virtual}}} keyword to the class' destructor. This matters because Qt relies heavily on polymorphism; without any virtual member, the compiler does not generate a virtual function table for the class and you'll get a linker error.
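Putting the points above together, a minimal declaration that keeps the MOC and the linker happy might look like this (a sketch only; it assumes Qt 4 and a hypothetical {{{MyWidget}}} class, and it must live in a header processed by the MOC):

```
#include <QObject>

class MyWidget : public QObject
{
    Q_OBJECT    // required for signals/slots and introspection

public:
    explicit MyWidget( QObject* parent = 0 ) : QObject( parent ) {}
    virtual ~MyWidget() {}    // virtual destructor ensures a vtable is generated

signals:
    void valueChanged( int newValue );

public slots:
    void setValue( int value ) { emit valueChanged( value ); }
};
```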
Unfortunately, working with Boost Python on the Mac is considerably more difficult than doing so on other platforms (e.g. Windows). Here I describe the basic steps and possible pitfalls.

''General Remarks''
First of all, you need to make sure that all the components involved in the process are compatible. These are:
* the compiled Boost Python libraries
* the Python shared library that you need in order to build your extensions
* the Python interpreter you use to run your Python scripts that import the extension module

''Python Version''
One of the major problems that you encounter on the Mac is that you typically end up with multiple Python installations on your machine.

//Python Versions that Ship with Mac OS X//
Referenced in {{{/usr/bin/}}}

//Python Versions that you get by installing the //dmg// on [[www.python.org]]//
Referenced in {{{/usr/local/bin/}}}

//Python Versions Built with Mac Ports//
Referenced in {{{/opt/local/bin/}}}

Now, having multiple installations will cause endless problems, because you never know which Python version gets linked to the Boost Python library and to your program, and which interpreter you end up using to run your Python extension. Again, all these components need to match! Alas, in most cases these various versions get mixed and matched, causing endless problems and frustration.

//Disabling Unwanted Python Distributions//
The easiest way to get your extension running properly is to disable all the additional Python distributions that we are not going to use for now. It is a good idea to do a Time Machine backup of your system before proceeding. We are going to get rid of all these stray distributions temporarily by simply renaming them:


sudo mv /usr/bin/python /usr/bin/__python
sudo mv /usr/local/bin/python /usr/local/bin/__python
sudo mv /System/Library/Frameworks/Python.framework /System/Library/Frameworks/__Python.framework
sudo mv /Library/Frameworks/Python.framework /Library/Frameworks/__Python.framework


Lastly, edit {{{~/.bash_profile}}} and remove the Python path from your PATH.

''Building the Libraries''
The built-in Python installation on Mac OS X won't do, because it does not provide the shared library needed to link against your extension. So, we need to compile it by hand. The easiest way to do it is by using MacPorts. Let's build it as a universal binary to maximize compatibility.

{{{ sudo port install python26 +universal}}}

Now, we need to make MacPorts' version of Python the default one:

sudo port install python_select 
/opt/local/bin/python_select python26

It is a good idea to log off or restart the machine at this point to make sure all environment variables are updated.

Now, we are going to install the Boost libraries with Boost Python. If you have already installed Boost with MacPorts, you should probably uninstall it to make sure that we get a new clean build: {{{ sudo port uninstall boost }}}

At this point, you should find something to entertain yourself with, because the following command is going to take pretty long:

{{{ sudo port install boost +universal +python26}}}

''Creating a Boost Python project in XCode''

* Start by creating an empty project
* Add a new //Dynamic Library// target
* Go through the target settings (option+command+E) and proceed as follows:
** remove every reference to AppKit
** change the architecture of the target to 32-bit universal
** change the extension to //so//
** change the install path to something reasonable, like "./"
** change the header search path to: {{{ $(BOOST_ROOT) /opt/local/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 }}}

We now need to specify the dynamic libraries to link against, but unfortunately we can't do so yet because {{{/opt/local/}}} is a hidden folder! Use [[this|Show Hidden Files in Mac OS X Finder]] trick to make hidden files visible in the Finder.

Go to {{{/opt/local/lib}}} in the Finder and drag the Boost Python Library that matches your active configuration into your XCode target.

In the same way, drag the main Python library into your project, which is located here:


This is in fact a //dylib// even without the extension!

Now, write some Boost Python code and compile. 
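As a starting point, here is the classic "hello" extension module (a sketch; the module name {{{hello_ext}}} is arbitrary, but it must match the file name of the compiled //so//):

```
#include <boost/python.hpp>

// A trivial function to expose to Python.
char const* greet()
{
    return "hello from C++";
}

// The macro argument must match the name of the compiled module (hello_ext.so).
BOOST_PYTHON_MODULE( hello_ext )
{
    boost::python::def( "greet", greet );
}
```

From the Python interpreter you would then call {{{import hello_ext}}} followed by {{{hello_ext.greet()}}}.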

If everything went fine, you should be able to import your compiled library (with extension //so//) from the Python interpreter.


//ImportError: No module named X//
* The extension of the library is not //so//
* The install directory of the dynamic library does not match its location. You can check this with {{{otool -L X.so}}}. [[Here|Installation Directory for Dynamic Libraries (DyLib) in XCode]] I discuss how to deal with these kinds of problems.
* The install directory of the Boost Python libraries does not match their location. When you build Boost with MacPorts, the installation directory is {{{/opt/local/lib}}}.

//Fatal Python error: Interpreter not initialized (version mismatch?)//
This error is likely to lead to a crash. It occurs when the interpreter that is running your script does not match the version that was used to build the library. This is caused by a combination of the following problems:
* The Boost Python library was not built against the correct version of Python
* Your Extension was not built against the correct version of Python.

When Python crashes, a dialog shows up asking if you want to report the problem to Apple. You can inspect the issue by looking at the generated report. Most likely you'll see a mismatch in the paths used to refer to your Python installation. This problem should not occur if you followed the steps above to make sure there is only a single active Python installation.

//ImportError: dlopen...mach-o, but wrong architecture//
At this point, your library should have built correctly, so you can be sure that there is no mismatch in architectures within your library. However, if you run your extension on the wrong version of the Python interpreter, you will get this error.

You could use the Mac Port Python framework directly in your XCode project, but I found that even when you drop the correct version of the Python framework in your project, XCode still uses the default system framework.
When you create a //pdf// file, all the images embedded in the document are typically preserved at the highest available resolution, even though the added resolution generally does not help when you read the document on a computer screen. This is at least the default behavior of Acrobat, Tex, and most other programs that are capable of publishing //pdf// files. This is good for printing, but it often results in rather big files that are not well suited for publishing on the web. However, it turns out that Mac Preview has a neat little feature that allows you to compress all images in a //pdf// document giving you files that are often ten times as small!

Here's how it's done. Open your //pdf// file with Preview and select {{{File->Save As->Quartz Filter->Reduce File Size}}} and voila'!

Actually, you can do more! In the {{{File->Save As->Quartz Filter}}} combo box, you find a whole list of effects that you can apply to the images in your //pdf// file. You can even create your own filter by using Mac's //ColorSync Utility// and adding a new filter in the //Filters// section.
For a long time PC gamers (I should say computer gamers) enjoyed much greater freedom than those who play on consoles. This can be partly explained by some [[historical events|North American video game crash of 1983]] during the early days of the industry. Here are a few highlights of what I mean by freedom:
* You can play online for free
* Games are unencumbered by show-stopping DRM
* Free SDKs and content creation tools for modding games
* A thriving community of talented modders that extends the life of a game far beyond its market lifetime
* Free additional content provided by developers themselves
* Third-party shops have complete creative and business control over their games.
Unfortunately, this model is all but disappearing now. I can think of a few chief reasons:
* The market has grown out of all proportion and is now a multi-billion-dollar mass market. Hence, the market is more conservative now.
* Games on consoles sell more, despite being more expensive and restrictive. As a result the business model of console games is now burdening non-console gamers as well.
* A major slice of the game buying population is too young to remember the long-gone freedoms of PC gaming.
* These freedoms have been taken away slowly over the years, so they disappeared without us even noticing.
A large market means that games are far more expensive to produce, competition is very harsh, and developers must be more conservative about what they develop and how they market their games. Many aspects of gaming on PCs used to be free because they were meant primarily to increase the lifetime of a product, but developers are now largely concerned with balancing their accounts, so you have to pay for them. Even a free SDK and mod tools used to be fairly cheap for a developer to share with the community, and in turn those tools helped games sell more. However, some [[unfortunate events|http://en.wikipedia.org/wiki/Hot_Coffee_mod]], combined with the stupidity (or greed) of some lawyers and politicians, forced even this apparently innocuous practice out of the picture. In a sense, developers feel more comfortable imposing greater restrictions on gamers, inspired by the console model. There is so much hustle in the industry now that even content creation by users is being turned on its head as a money-making tool in games such as Spore and Little Big Planet.

Free online play was another distinctive freedom of PC gaming. Multiplayer matches added value to old games like Doom or Quake at almost no expense for the developer. Instead, online play today requires substantial investments in terms of infrastructure and maintenance. The emergence of MMOs also helped the industry shift toward a pay-per-play approach, since these games cannot survive without a persistent supply of cash fueled by monthly subscriptions. And Microsoft has been milking players on XBOX Live for quite some time; why should they let PC gamers play online for free? So it goes.

The proliferation of restrictive DRM features in PC games is perhaps the most despicable attack on gamers' freedom. Of course, mechanisms for copy protection have been around for a long time, but they never reached the heights of [[Spore's DRM|http://blogs.zdnet.com/hardware/?p=2617]].

In spite of all this, the success of digital delivery and independent games on channels like XBOX Live Arcade and Steam seems to have established an opposite trend.
Most journal publications require authors to submit their manuscripts in an editable format like RTF or Word, and they employ their own professional editors to create the final document for print. On the other hand, authors of technical and scientific papers prefer to typeset their documents in Latex. Now, how can you convert a Latex document to RTF? It turns out that there is a nice little tool called {{{latex2rtf}}} that performs exactly this task and can be found here:
This tool works on almost all platforms and is used directly as a backend by Lyx.
Case conversion for STL strings can be accomplished easily using the STL transform function:
#include <algorithm>
#include <string>

std::string data = "Abc";
std::transform( data.begin(), data.end(), data.begin(), ::toupper );

This solution is courtesy of: [[NotFAQ|http://notfaq.wordpress.com/2007/08/04/cc-convert-string-to-upperlower-case/]]
I discussed [[earlier|Creating Platform Projects with QMake]] how to convert project files from Qt ({{{.pro}}}) to Visual Studio ({{{.vcxproj}}}). However, Qt project files may often specify a nested hierarchy of projects. A good example is Qt's own set of project files used to build its examples and demos. In this case you need to tell {{{qmake}}} to convert the {{{.pro}}} file recursively into a Visual Studio solution. Here's how:

{{{qmake -r -tp vc mainprojectfile.pro}}}

* http://stackoverflow.com/questions/6057981/how-can-i-create-visual-studio-solution-file-from-nested-qt-project-using-qmake
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-nd/3.0/us/88x31.png" /></a><br /><span xmlns:dc="http://purl.org/dc/elements/1.1/" href="http://purl.org/dc/dcmitype/Text" property="dc:title" rel="dc:type">Train of Thought</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://www.cs.ucla.edu/~nataneli/research_site/train_of_thought/index.html" property="cc:attributionName" rel="cc:attributionURL">Gabriele Nataneli</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/us/">Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License</a>.<br />Based on a work at <a xmlns:dc="http://purl.org/dc/elements/1.1/" href="http://www.cs.ucla.edu/~nataneli/research_site/train_of_thought/index.html" rel="dc:source">www.cs.ucla.edu</a>.
If you author a //pdf// document with Adobe Acrobat or a typesetting tool such as Latex, you may often end up with a document that is not truly portable. The biggest problem is that these tools may not always embed all fonts in your //pdf// file. Sometimes it is just a matter of setting your authoring software to explicitly embed all fonts, but even then you may end up with some missing fonts. This happens because these programs, notably Adobe Acrobat among others, refuse to embed fonts that are protected by copyright, which is a fairly common situation in Windows.

There is a solution though """--""" if you have a Mac, that is.

* Create your //pdf// file
* Open in //Preview//
* Select "Print..."
* Click the //PDF// button in the lower left corner and select "Save as PDF..."
* Save the file

That's it! The new file will be a clean and portable file with all fonts embedded.

This tip is courtesy of [[Mac OS X Hints|http://www.macosxhints.com/article.php?story=20060203175741232]]
Qt is easily the best cross platform library for GUI development. However, the build process for Qt projects is a little cumbersome unless you rely on the tools that ship with the Qt distribution itself. 

One of the tools that comes with Qt is a cross-platform make utility called QMake, which creates custom makefiles or platform projects based on a special platform-independent project file. In fact, Qt has special build requirements that involve the execution of a number of custom build tools, so creating a makefile or other platform project by hand is extremely difficult.

The first thing that you need to do to deal with a Qt project is to create a QMake makefile. A QMake makefile has extension //.pro// and can be created in a couple of different ways:

* Use Qt Creator to set up your project.
* Edit the makefile by hand
* Use qmake to create an empty project for you with {{{qmake -project}}}

The reference manual for QMake is found [[here|http://doc.trolltech.com/4.6/qmake-manual.html]].

Here I detail the command lines for creating platform project files. In all cases I assume that the current directory is the one containing the //pro// file.

Visual Studio Application

qmake -t vcapp

Visual Studio Library

qmake -t vclib

XCode Project

qmake -spec macx-xcode

Darwin Makefile on Mac OS X

qmake -spec macx-g++
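If you edit a project file by hand instead, a minimal //pro// file for a small application might look like the following sketch (the target and file names here are just placeholders):

```
TEMPLATE = app
TARGET = myapp
QT += core gui

# Placeholder source files
HEADERS += mainwindow.h
SOURCES += main.cpp mainwindow.cpp
```

Running any of the {{{qmake}}} commands above in the same directory then produces the corresponding makefile or platform project.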
When you create a public/private key pair for ssh with the command
ssh-keygen -t dsa
By default the keys are saved in {{{~/.ssh}}} as:
* {{{id_dsa}}}
* {{{id_dsa.pub}}}
Then, of course, the content of the public key needs to be copied in the {{{.ssh/authorized_keys}}} file on the remote server.

Now, what if you want to manage multiple ssh connections?
You can easily specify additional hosts by creating the file {{{~/.ssh/config}}} on your machine and adding something like:
Host somehost
     IdentityFile /path/to/extra_secret_key
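For example, a {{{~/.ssh/config}}} covering two servers might look like this sketch (the host names, user name, and key paths are made up):

```
Host workserver
    HostName work.example.com
    User alice
    IdentityFile ~/.ssh/id_dsa_work

Host homeserver
    HostName home.example.com
    User alice
    IdentityFile ~/.ssh/id_dsa_home
```

With this in place, {{{ssh workserver}}} automatically uses the matching key, and the short host name works for anything that tunnels through ssh (such as {{{svn+ssh}}} URLs) as well.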
* http://stackoverflow.com/questions/736572/how-to-get-ssh-to-use-a-different-id-dsa
A few days ago, I went to see the movie Watchmen. I wasn't actually familiar with the original graphic novel, but I was compelled to see it, because it was a highly publicized, big-budget action movie. And then again my job makes watching these kinds of movies pretty much mandatory. The movie was pretty good, so I decided to do a little research to learn more about the original graphic novel. I discovered that the main artist of the comic was Dave Gibbons. Wow, Dave Gibbons! Dave Gibbons made the art of one of the greatest sci-fi graphic adventure games of all time -- [[Beneath a Steel Sky|http://en.wikipedia.org/wiki/Beneath_a_Steel_Sky]].
The most obvious feature of a debug configuration is that it emits the extra information the debugger needs to trace through your source code. But that is not all! A debug build also decorates your code with several consistency checks to promptly identify common sources of trouble, such as:
* buffer overruns
* use of uninitialized variables
* other sources of memory corruption.
Of course, a release build strips away all this information giving you leaner and faster executables. However, one of the most important differences is that a release configuration is built with compiler optimizations that are designed to make your binary substantially faster. Yet while these optimizations are very valuable for production builds, they will also trigger a lot of seemingly inexplicable run-time crashes if your program contains memory leaks or other sources of memory corruption. Here is the catch: these kinds of flaws are typically harmless in debug mode, so you may think that your code is correct when in fact it is not. A release build only exacerbates these flaws without really introducing new ones of its own making. It is, therefore, not uncommon to have debug and release versions that behave rather differently at run-time. 

In this tiddler I will analyze thoroughly some common causes that lead to inconsistencies between debug and release builds, and some ways to debug and remove the kind of flaws that produce them.

''Key Differences''

A Debug build defines the {{{_DEBUG}}} macro, which enables a lot of extra code in the standard library for compilation. Most of this additional functionality is designed to perform extra checks and generate more helpful exceptions if something goes wrong. A Release configuration instead defines the {{{NDEBUG}}} macro. Some functionality in the standard library uses an alternative, faster implementation if this macro is defined, but for most practical purposes the most important difference is that all code that depends on {{{_DEBUG}}} is not compiled. Many external libraries, and even your own code, may depend on these macros, so you must always make sure that their presence does not compromise the correctness of your code.

''Assertions''

This is just a special case of the previous point, but it is important enough to deserve its own explanation. Well designed code should make good use of the {{{assert}}} statement to enforce pre- and post-conditions in your routines. However, the {{{assert}}} statement becomes a NOOP in Release mode, so make sure that no useful work is done inside an assertion. Example:
Variable B;
assert( initialize( B ) );
performOperation( B );

This code ''will break'' in a Release build, because the initialization will not occur and you will end up sending an uninitialized variable to {{{performOperation}}}. A correct way of writing this is
Variable B;
bool success;
success = initialize( B );
assert( success );
performOperation( B );

The latter implementation still does not handle errors robustly and if the initialization fails in Release mode, there is nothing to prevent the error from propagating further. Yet the code will behave the same way in both Debug and Release configurations. 

''Building a Release Configuration with Debug Information''
While a Release build does not allow you to step through your source code by default, you can easily configure it to do so, with only a minor dip in performance.  The MSDN entry in [1] gives some useful information on how to modify the property pages of a Visual Studio project to debug a Release configuration. 

''Templatized Code''
One subtle difference between debug and release configurations in Visual Studio is that a release build typically enables incremental linking. The rationale for this is that an optimized build takes longer to complete, so it makes sense to use incremental linking to speed things up. However, if you have some heavily templatized code, you may end up with a binary that is produced by combining several inconsistent instantiations of C++ templates. The result? Completely nonsensical results or ridiculous crashes. Another common problem is that a Debug build may work fine even if the compiler got confused and produced an incorrect instantiation of a template. The worst thing that may happen in Release mode is that the variables you pass to a templatized object will undergo an implicit cast. To avoid this, never nest the template arguments of an object.    

''Release Builds Work Fine But Debug Mode Crashes''
In some cases, you may run into annoying crashes in a Debug build, but the Release build works just fine. Or at least that is what you may think. Visual Studio //poisons// uninitialized variables and performs several consistency checks that highlight code flaws and memory corruption as soon as they occur in Debug, but a Release build might just ignore them. Yet these bugs are very real, and sooner or later they will come back and bite you, either as security holes or unexpected crashes in the final version of your application!

''Linking Against the Wrong Version of a Library''
All real applications rely on external libraries to operate, and in many cases their debug and release versions are not binary compatible. So what happens if you link against the wrong version of a library? All hell breaks loose! Your application, if it ever starts, will exhibit really odd behavior, generate absurd results, and most often crash. Unfortunately, Visual Studio is not able to detect these kinds of incompatibilities by itself due to some limitations in how Windows manages dependencies. Therefore, the linker will never stop you from combining incompatible versions.

''Broken Dependencies''
Even if you link against the correct libraries at build time, you must also make sure that the dlls that are loaded at run time are correct. A mismatch in dlls is likely to produce crashes that you may not encounter in Debug mode.

Release builds cause a lot of head scratching, but they are also a good tool to discover well-hidden bugs in your code. You always want to find bugs early and fix them quickly before they set their roots too deep in your software design. Thus, it is advisable to build your code in release mode often while you develop and not just at the very end when you want to ship your final application with all optimizations enabled. 

# [http://msdn.microsoft.com/en-us/library/fsk896zz(v=VS.100).aspx]
In Windows when you open a new application, the corresponding window may either:
# open with a predefined position and size
# open with a default position and size based on the previous state of the application

Most applications follow the second behavior. Now, the question is how to define the default size and position of the application so that it will retain this state the next time you open it. It is very simple: //make sure that the window is not maximized//, then arrange the window the way you prefer and close it. Voila', next time you open the application it will reopen the same way.

* The application will behave this way only if it doesn't override the default window manager
* Some applications don't remember the previous state if you close them with the X button, so you must close using {{{file->exit}}}
Dealing with dlls is always a pain. Today I am going to describe a subtle problem that can arise when you want or need to load a specific version of a dll at run time.

Say that you link your application statically with a library called //myModule.lib// and this library needs to load //myModule.dll// at run time. Now, say that you need to load a specific version of //myModule.dll//, which resides in a directory that either is not found in the system PATH or must differ from the one specified in the PATH. Let's start with the first case:

//Dll Directory not in System Path//
If you just run your application, Windows will immediately come up with an error saying that the dll was not found. You could add the directory that contains your dll to the system PATH programmatically by using //putenv//, but even that won't work in this case!

In fact, if you inspect your application with the debugger, you'll discover that the exception is raised by the OS loader and that your application does not even reach the main entry point. What can we do then? In Visual Studio there is a linker option called //Delay Loaded Dlls// that can do the trick. In the property pages of your project go to ''Linker->Input'' and in the entry named ''Delay Loaded Dlls'' enter the name of your dll - //myModule.dll// in this case. This option defers the loading of the specified dlls until they are needed, so it allows the OS loader to run your program without explicitly checking for the problem dlls. Of course, if you make a library call that requires the dll and still fail to set the system PATH appropriately, you will get the same error message as before. However, using this option, you have a chance to run your program long enough to properly set the system PATH without crashing your application. Naturally, your invocation of putenv should be one of the first things in //main//.

//Dll Directory Must Be Different than the One Specified in the System Path//
The only gotcha here is to make sure that, when you modify the system PATH with putenv, you append the current system PATH at the end and not at the beginning; otherwise the system-wide settings will take precedence.

* http://msdn.microsoft.com/en-us/library/151kt790(v=VS.90).aspx
Dealing with dependency hell is always difficult """--""" very much so if you are in Windows. On the other hand, while Windows does not really provide good ways to alleviate the difficulty of resolving dependency problems (unlike [[Mac OS X|Bundling Dylibs Correctly on Mac OS X and XCode]]), there is an excellent application on Windows for exploring the dependencies of a given executable or dll in a nice, user-friendly GUI. The application is called [[Dependency Walker|http://dependencywalker.com/]] and is absolutely fantastic!
There is a bug in Mac OS X that causes all icons on your desktop to disappear. This problem seems to be triggered by multiple-monitor setups. On top of that, when this happens, it is also impossible to invoke the context menu on the desktop. The items that are supposed to be on the Desktop are still available in the Finder under the Desktop folder.

Apparently, the problem is caused by the Finder entering some kind of inconsistent state. As suggested [[here|http://forums.macrumors.com/showthread.php?t=332959]], this problem can be solved by forcing the Finder to restart. You can do so by typing {{{killall Finder}}} in the Terminal.
Boost [[Smart Pointers|http://www.boost.org/doc/libs/1_42_0/libs/smart_ptr/smart_ptr.htm]] are a great resource for C++ programmers, alleviating the complexity of dynamic memory allocation and the use of raw pointers. If you allocate memory dynamically, you must eventually give it back to the operating system by using //delete//, otherwise you'll end up with a memory leak. Doing so systematically is a pain, especially if multiple objects share a reference to the same block of memory. If you delete some memory while other objects still reference it, those objects might try to access memory that is not available anymore and all kinds of bad things will ensue. Smart pointers simplify this by following two basic principles:
* if you allocate memory dynamically you must release it eventually """--when you don't need it anymore""".
* the last object to reference the pointer to dynamic memory is responsible for the clean up. This design concept is called //resource acquisition is initialization//.

A smart pointer keeps track of all references to it through the copy constructor, which adds one to a reference count every time the object is passed by value to another scope. The reference count is decreased every time a shared pointer owning the object goes out of scope. The referenced object will get disposed when the reference count hits zero.

One of the problems with smart pointers arises when you need to reference the //this// pointer of a class from within a method. This occurs, for instance, if a class owns an object that in turn needs to reference the class it belongs to. You cannot simply create a smart pointer to //this// and pass it around.

Let's explore this concept. Consider the following toy program


#include <iostream>
#include <boost/shared_ptr.hpp>

// forward declaration
class B;

class A
{
public:
	~A()
	{
		std::cout << "A's destructor" << std::endl;
	}

	void setB( boost::shared_ptr< B > myB )
	{
		_myB = myB;
	}

	boost::shared_ptr< B > _myB;

};	// class A

class B
{
public:
	~B()
	{
		std::cout << "B's destructor" << std::endl;
	}

	void setA( boost::shared_ptr< A > myA )
	{
		// creates a second, independent shared_ptr that owns this instance
		myA->setB( boost::shared_ptr< B >( this ) );

		_myA = myA;
	}

	boost::shared_ptr< A > _myA;

};	// class B

int main()
{
	boost::shared_ptr< A > sharedA( new A );
	boost::shared_ptr< B > sharedB( new B );
	sharedB->setA( sharedA );

}	// main


Here we have two classes A and B that cross-reference each other. When we give B a shared pointer to an instance of A, B's method in turn passes a shared pointer to its own //this// pointer back to A. In main, we instantiate each class and call B's mutator to exercise this behavior. When we run the program, we get

B's destructor
A's destructor
B's destructor

The application crashes because we have two distinct shared pointers owning our instance of B. The first is the one created in {{{main}}} and the second is the shared pointer created in {{{B::setA}}}. These shared pointers have distinct reference counts, so the instance of B ends up being destroyed twice, thus triggering a crash.

To get around this problem, we use a nifty facility in Boost called [[enable_shared_from_this|http://www.boost.org/doc/libs/1_42_0/libs/smart_ptr/enable_shared_from_this.html]]. Deriving from this class allows you to create shared pointers to {{{this}}} from within a method that don't clash with other shared pointers to the same instance of an object.

We modify the code as follows


#include <iostream>
#include <boost/shared_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>

// forward declaration
class B;

class A
{
public:
	~A()
	{
		std::cout << "A's destructor" << std::endl;
	}

	void setB( boost::shared_ptr< B > myB )
	{
		_myB = myB;
	}

	boost::shared_ptr< B > _myB;

};	// class A

class B : public boost::enable_shared_from_this< B >
{
public:
	~B()
	{
		std::cout << "B's destructor" << std::endl;
	}

	void setA( boost::shared_ptr< A > myA )
	{
		// shares ownership with the shared_ptr that already owns this instance
		myA->setB( shared_from_this() );

		_myA = myA;
	}

	boost::shared_ptr< A > _myA;

};	// class B

int main()
{
	boost::shared_ptr< A > sharedA( new A );
	boost::shared_ptr< B > sharedB( new B );
	sharedB->setA( sharedA );

}	// main


When we run this code, the program does not crash, but unfortunately we also find that ''none of the destructors is ever called''. The problem is that A stores the reference to B in a shared pointer, which also owns the instance it refers to. As a result, multiple objects own references to each other and neither reference count can ever hit zero. The solution is to use a Boost {{{weak_ptr}}} instead, which is a non-owning observer of a shared pointer. In other words, a weak pointer simply stores a reference to the object managed by another shared pointer without ever trying to keep it alive or delete it. Putting all this together, we can write the final, correct version of this code.

#include <iostream>
#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <boost/enable_shared_from_this.hpp>

// forward declaration
class B;

class A
{
public:
	~A()
	{
		std::cout << "A's destructor" << std::endl;
	}

	void setB( boost::shared_ptr< B > myB )
	{
		_myB = myB;
	}

	// non-owning observer: does not keep B alive
	boost::weak_ptr< B > _myB;

};	// class A

class B : public boost::enable_shared_from_this< B >
{
public:
	~B()
	{
		std::cout << "B's destructor" << std::endl;
	}

	void setA( boost::shared_ptr< A > myA )
	{
		myA->setB( shared_from_this() );

		_myA = myA;
	}

	boost::shared_ptr< A > _myA;

};	// class B

int main()
{
	boost::shared_ptr< A > sharedA( new A );
	boost::shared_ptr< B > sharedB( new B );
	sharedB->setA( sharedA );

}	// main
While Subversion (SVN) is generally a very good version control system, it gets somewhat annoying to use when the repository is stored on a server through an SSH connection. The key problem is that with an ssh connection all SVN commands trigger multiple password prompts (sometimes as many as 6!). Sure, security is important, but this is really too much for most people. There is in fact a way to save the public/private key pair permanently, so that you don't have to enter the password every time. This page documents how to do so in Windows in conjunction with the excellent SVN client TortoiseSVN:

There is a common registry trick that enables editor guidelines in Visual Studio as described [[here|Visual Studio 2008 Tweaks]]. However, this tweak does not seem to work consistently in Visual Studio 2010. To achieve the same result, you need to install a special extension before tweaking the registry.

* Go to Tools->Extension Manager->Online Gallery
* Search for Editor Guidelines and install the corresponding extension
* Go to the registry key {{{HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Text Editor}}}
* Add a string value named {{{Guides}}} with the data {{{RGB(128,128,128) 80}}}
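Equivalently, the registry step can be captured in a //reg// file that you import by double-clicking it (the color value and the column number 80 are just the ones from the example above):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Text Editor]
"Guides"="RGB(128,128,128) 80"
```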
The '80s were a very prolific time for home computers. Many platforms were released in that decade and each one had its own dedicated group of followers. On the flip side, a large number of these platforms were gone by the early '90s as the so-called IBM compatible PCs started to dominate the market. For instance, I grew up playing and programming on two glorious platforms, the Commodore 64 and the Commodore Amiga, but I had to move on when Commodore went bankrupt in 1993 and I slowly became a PC proselyte like many others. 

Some old users still dearly remember these machines, and among them a group of enthusiasts started a few projects to preserve these platforms for posterity. So in the late '90s and early 2000s several open source emulators were born. Developing an emulator is a monumental task that takes a great deal of time and effort. Now, the question is, why would someone do something like that? Certainly there isn't any monetary gain in developing a sophisticated piece of software that reproduces an essentially obsolete piece of computer history. But it is that very history that is worth saving! Thus, I can think of three main reasons why someone would want to make an emulator:
* Allow old software to run on modern platforms even when the original hardware is not working anymore.
* Preserve a maniacally detailed description of the original hardware. In fact, no amount of documentation could provide as much insight into the inner workings of a complex hardware platform as a full-blown software emulation.
* Preserve a personal memory of a long-gone platform. Why not? If you are going to spend sleepless nights working on a substantial software project, why not enjoy it?

One of the most remarkable emulators is [[WinUAE|http://www.winuae.net/]], a spin-off for Windows of the original Unix Amiga Emulator. Nowadays, WinUAE can emulate the original Amiga and its variants with great accuracy, and thousands of original applications and games run on it without hiccups. What makes WinUAE so remarkable is that the Amiga is an extremely complex machine and a good part of its hardware is largely undocumented. According to Wikipedia, the success of the Amiga emulator convinced other developers that making a faithful emulator was indeed possible:

//The realization that a useful Amiga emulator could be written contributed to an increase in enthusiasm about emulation, which started or sped-up efforts to write emulators for other and often less popular computer and electronic game architectures.//

However, these apparently innocuous efforts sparked a lot of controversy. Where are you going to get the old software and games to use with the emulator? If you own the original discs, you may use a tool to dump their contents into a ROM image for the emulator. Unfortunately, though, many commercial applications have a copy protection mechanism in place, so dumping the software means that you have to crack its protection, which is illegal. Then, as I said before, the goal of emulators is to preserve software for posterity. There is no preservation if each person makes personal dumps of the software he owns, which is invariably only a small fraction of all the software that was available for the original platform. However, you cannot share any of these applications, because most of them are still covered by copyright. You can't do that even if the original company that published the software is not in business anymore. Perhaps it has to do with a legal system that is full of nonsense, and many greedy individuals who enjoy suing others, but this is not for me to judge. If you are lucky and determined, you may find the copyright owners or the original developers and obtain written permission, but in most cases this is almost impossible. Ironically, in some cases, the original developers would have to authorize the distribution of a cracked version of their game, because the original could not be copied or dumped into a ROM. Of course, most companies would not allow that, especially if they are still in business. On the other hand, there are a few notable groups that have managed to collect hundreds of ROMs legally, such as [[Back2Roots|http://www.back2roots.org]].

In any case, no one is going to earn cash on ancient software, and you are not going to help the developers either. Therefore, in a sense, there isn't even any ethical drive to get //legal// ROMs. When you buy an mp3, a movie, or a game instead of downloading it from your favorite p2p channel, you likely do so because you think that those hard-working, talented people who made that content deserve your support. But here the distinction between legal and non-legal is only a lawyers' contrived paradox and not something that carries any real value.

Then there is another troublesome issue. The emulation scene used to be a niche phenomenon. There aren't indeed many users who would like to play very old games or work with old productivity applications. Most importantly, it is a non-profit effort. Nonetheless, after publishers discovered that some people would actually enjoy playing old games, they started putting a price tag on them, offering a compromise version of their games that """--with some effort--""" would run on modern machines without the emulator. Now, I appreciate the kind of updated versions of old games you find on services such as XBOX Live! Arcade, but paying ten dollars for an essentially unmodified obsolete game is too much!
While using a Mac is easy on the surface, there are many things that are fairly straightforward to do in other systems, but tremendously hard to do in Mac OS X. One of them is setting environment variables globally for all applications.

''Shell Profile''
Setting an environment variable persistently for the terminal is relatively easy. Just create the file {{{~/.bash_profile}}} (or edit it if it exists already) and append to it an export command like this:
export NAME_OF_VARIABLE=ValueOfVariable
Now restart the terminal and type {{{env}}} to verify that the environment variable was set correctly. However, __this method will not work for GUI applications.__

''GUI Applications''
A better way to set environment variables for GUI applications is to edit the file {{{~/.MacOSX/environment.plist}}}.

//The Hard Way// 
touch ~/.MacOSX/environment.plist # use this only if the file does NOT exist already!
open ~/.MacOSX/environment.plist

This should invoke the //plist editor//. Use the GUI to add an item of type string corresponding to your environment variable.
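For reference, {{{environment.plist}}} is an ordinary XML property list, so you can also edit it in a text editor. A minimal sketch of what the plist editor produces (the variable name and value are the same placeholders used above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NAME_OF_VARIABLE</key>
    <string>ValueOfVariable</string>
</dict>
</plist>
```

Each additional environment variable is just another {{{key}}}/{{{string}}} pair inside the {{{dict}}}.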

// The Easy Way//
Download and install this little utility called [[RCEnvironment|http://www.rubicode.com/Software/RCEnvironment/]]. Once installed, a little item called //env// will show up in your //System Preferences//. This tool allows you to set environment variables without having to mess with the terminal. As you can imagine, this tool is simply an interface for editing the {{{environment.plist}}} file described above.

Whichever way you use to set the environment variable with this method, __you must log out and back in to apply the change__.

Unfortunately, __this method works only for applications run from the terminal or the Finder__. More importantly, this method will not work for applications invoked from Spotlight.

''All Applications at All Times No Matter How You Invoke Them''
Praise goes to Steve Sexton for discovering [[this method|http://www.digitaledgesw.com/node/31]], which is apparently undocumented. In order to set an environment variable //globally// you have to make it visible to the parent of all processes, which is called {{{launchd}}} in Leopard. You do this by adding a line to {{{/etc/launchd.conf}}}:
setenv NAME_OF_VARIABLE ValueOfVariable
The powerful network analyzer [[Ethereal|http://www.ethereal.com]] is being superseded by [[Wireshark|http://www.wireshark.org]]. Apparently they had to change the name due to some legal issues. Annoyingly, you do not find any mention of this on the now rather outdated website of Ethereal, which is still available. 

* installing Wireshark on Snow Leopard is not particularly straightforward and requires some manual work:
** if you use the pre-built binary, you should follow [[these|http://michaelgracie.com/2009/10/13/getting-wireshark-running-on-os-x-snow-leopard-10.6/]] instructions.
** you can also install Wireshark using MacPorts, but watch out that it has a lot of dependencies and it may take a while!
MATLAB comes packed with very powerful visualization capabilities. However, exporting figures for print and other purposes is not always straightforward. Here I collect some advice on how to get the best results.

''Save as "fig" to retain your original MATLAB figure''
MATLAB stores your figures internally as vector graphics, so you can scale and modify them without losing any resolution. In order to preserve all the properties and detail of your original MATLAB figure you must save it in the native {{{fig}}} format. In fact, the native format not only saves your figure, but also all the data you used to generate it, so you can later modify your visualization altogether. Saving your image in almost any other format produces a raster image. 

''Almost any other format produces a raster image''
Confusingly, even if you save your image in a format that supports vector art, such as {{{pdf}}}, {{{eps}}}, or {{{emf}}}, you still get a raster image embedded in the file by default. This is a problem if you meant to export your figure for print media, which typically requires relatively high resolutions for best results. To make matters worse, plots and graphs often have lots of thin lines that will disappear without sufficient resolution.

''Set the proper dpi resolution for print media''
Raster graphics is still viable for print media as long as the resolution is sufficiently high. To set the resolution in MATLAB go to {{{Export Setup...->Rendering->Resolution (dpi)}}} and then press {{{Export}}} to actually save the file. Although your image is saved as a raster, it is still advisable to use a print-friendly format like {{{eps}}} or {{{pdf}}}.

''How to export actual vector art''
The only non-native format that is saved as vector graphics by default is the Illustrator format {{{ai}}}. However, this is hardly an ideal format for exporting MATLAB graphics, since the resulting Illustrator graphics do not generally look very good. There is, however, one way to force MATLAB to export vector graphics for any of the formats that support it. Here is how: go to {{{Export Setup...->Rendering->Custom renderer}}} and select {{{painters (vector format)}}}. Also remember to enable the corresponding check box. Now save in a format that supports vector graphics, like {{{pdf}}}. 

''Why you should not always use vector graphics''
Vector graphics is great for print media, but there is one shortcoming. Complex MATLAB figures, such as 3D visualizations, are often composed of a very large number of individual polygons, which makes the resulting vector art very slow to render. For instance, if you open one such figure in Acrobat Reader, it will take a long time to display and will typically make it very slow to scroll or zoom the document. Remember that vector art is not designed for rendering complex 3D graphics!

''Make your lines thicker for print media and presentations on big screens''
By default all lines in your figures are very thin. This is fine when you view your graphics on a computer screen, but is generally problematic for any other medium. Lines that are too thin are generally not visible when you print your document, even if you saved in a vector format. Also, thin lines tend to disappear if you saved your figure in a raster format and need to scale it down. In MATLAB you can make lines thicker by:
# Going to  {{{Export Setup...->Lines}}} and tweaking the options to make all lines in your figure thicker
# Editing individual line properties of your figure from the figure property pages

* If you want to show your figure in PowerPoint under Windows, the best format is {{{emf}}} with vector graphics enabled. Beware that this format can really slow things down if your figure is very complex.
* If you want to show your figure in Keynote on Mac, then {{{pdf}}} is the best format, since it is supported natively.
The topic of 3D reconstruction of faces from photographs is a serious research challenge and a largely unsolved problem in computer vision. There are, however, a few companies that use cheap ad hoc techniques that can give you a 3D face that looks vaguely similar to the one portrayed in the photograph. Two notable attempts are:
* [[FaceGen|http://facegen.com]]
* [[BigStage|http://bigstage.com/login.do;jsessionid=TYyYA111Wizf+JM8LK2NQg**.Node1]]

Results are not very good if you were looking for a realistic face model, but they work decently for some applications. For instance, FaceGen is used in several top-notch games, such as Fallout 3 and Oblivion. 

The idea behind these tools is to create a //parameter space// by putting together a large collection of 3D faces. Each face differs from the others by some peculiar aspect, such as age, ethnicity, gender, height of forehead, type of chin, etc. These 3D assets are carefully modeled by artists. Once you have all these 3D models you can get any face //in between// by interpolation. The technology to make this possible is rather simple and the quality of the results depends primarily on the quality of the 3D faces. You can do even more by defining a parameter space of face textures.
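To make the interpolation idea concrete, here is a minimal, hypothetical sketch in Python: the two "faces" are stand-in vectors rather than real artist-modeled meshes, and {{{blend}}} simply computes a weighted average in the parameter space.

```python
# Hypothetical sketch: each "face" is a flat list of numbers. In a real system
# these would be vertex coordinates of meshes that all share the same topology,
# so that averaging corresponding entries is meaningful.
face_a = [0.0, 1.0, 2.0]   # e.g. exemplar: young face, narrow chin
face_b = [2.0, 3.0, 4.0]   # e.g. exemplar: older face, wide chin

def blend(faces, weights):
    """Return the face at the point in parameter space given by `weights`."""
    total = sum(weights)
    weights = [w / total for w in weights]          # normalize the weights
    # Weighted average of corresponding components across all exemplar faces.
    return [sum(w * v for w, v in zip(weights, col))
            for col in zip(*faces)]

# The face halfway between the two exemplars:
print(blend([face_a, face_b], [0.5, 0.5]))  # [1.0, 2.0, 3.0]
```

The same scheme extends to any number of exemplars, and fitting a photograph then becomes a search for the weight vector that best matches the photo's features.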

In order to find a face that looks similar to your photograph you need to do two things:
* Solve an optimization problem that finds the parameters yielding a face that matches the features of the photograph.
* Extract a texture from the photograph and apply it to the 3D model.  
While many front ends for SVN might tell you the URL of the repository directly in a GUI, it is not immediately obvious how to find it out from the command line.

//Straight Method for Regular Users//
Navigate to the path of the folder under version control and type {{{svn info}}}. The entry called {{{Repository Root}}} tells you the base URL of the repository.

//Sneaky Method for Hackers//
Navigate to the {{{.svn}}} folder of your project under version control and open the file {{{entries}}} in a text editor. One of the lines in the file specifies the URL. 
In addition to the regular permission attributes that all files have under a UNIX-like operating system, Mac OS X can lock files in an additional way. You can see this property in the Finder by selecting a file and looking at its properties with //Get Info// (Command + I). When files are locked you typically cannot modify them in the terminal, even with {{{chmod 777}}}. You can, of course, remove this attribute from the Finder by unchecking the corresponding check box in the file inspector, but this is cumbersome when you need to deal with a lot of files. You can also modify the locked attribute from the terminal. 

To unlock a file: {{{ chflags nouchg filename }}}

To lock a file: {{{ chflags uchg filename }}}

To unlock an entire folder, you can also use the //recursive// option: {{{ chflags -R nouchg *}}}

A handy solution is to add some aliases to your {{{.bash_profile}}} 


alias unlock="chflags nouchg"
alias lock="chflags uchg"

In a Unix system you always encounter several folders named {{{etc}}}, {{{var}}}, {{{usr}}} and so on. Who came up with these names? Are there any guidelines on how to put stuff in them? It turns out that these names are the result of a carefully thought out standard called the //Filesystem Hierarchy Standard//. Here is the most useful excerpt from the official standard:

bin 		Essential command binaries
boot 		Static files of the boot loader
dev		Device files
etc 		Host-specific system configuration
lib		Essential shared libraries and kernel modules
media		Mount point for removeable media
mnt		Mount point for mounting a filesystem temporarily
opt		Add-on application software packages
sbin		Essential system binaries
srv		Data for services provided by this system
tmp		Temporary files
usr		Secondary hierarchy
var		Variable data

* http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
* http://www.pathname.com/fhs/pub/fhs-2.3.pdf
Once in a while, Visual Studio may generate spurious warnings complaining that it cannot find some PDB files. The problem is that the intermediate //object// files used by the incremental compiler store the path to their corresponding PDB files. If the linker can't find the PDB file corresponding to a given object file, it will complain with that pesky warning. 

This problem typically occurs in large solutions that contain many projects. 

//Inconsistent Project Settings//
You should make sure that all projects emit their intermediate files in a local directory relative to the project instead of the global solution directory. If you do that, you'll ensure that the relative paths used by the compiler for the PDB references are always correct. 

Here's how to do it. Go to the property pages of each project and make sure that you have the following:
* ''output directory'': {{{$(ProjectDir)$(ConfigurationName)}}} 
* ''intermediate directory'': {{{$(ConfigurationName)}}} 

This is certainly not the only way to resolve this problem, but it is still a good idea to unify the way all projects in a solution emit their intermediate files.

//Broken Project Dependencies//
If you have multiple projects that depend on each other, the linker expects that the PDB files of a dependency are created before the object is linked. Make sure that the project dependencies you specify in the solution properties match the actual dependencies specified in the linker properties for each project. 

//Inconsistent Debug Information//
Visual Studio can generate debug information in a variety of formats depending on how thorough you want your debugging sessions to be. You can set the debug information format in {{{Project properties->C/C++->General->Debug Information Format}}}. Typically more thorough debug information will make your builds slower, so you don't want to abuse this option. Whatever choice you make, make sure that all projects in a solution use the same debug information format; otherwise the linker will fail to properly link binaries that use different formats and you will get a lot of unwanted warnings.

//Build Errors//
Say that project A depends on project B and project B fails to compile. If you are running a multithreaded build and explicitly specifying the dependencies between your library projects in the linker settings, the linker will still try to link project A to project B, and it will succeed because the last build of project B was successful and is still compatible with A. However, the linker will find that the signature of the PDB file of project B is not up to date and will trigger a warning. That is of course not supposed to happen, but the linker cannot always correctly resolve these kinds of dependencies.

* Open one of your //object// files and inspect it with a hex editor. Now, search for the string {{{pdb}}}. One of the search hits should give the absolute path where Visual Studio is going to look for the PDB file corresponding to that object. 

* http://cldoten.wordpress.com/2009/07/01/vs2008-fixing-the-warning-pdb-vc90-pdb-not-found/
* http://msdn.microsoft.com/en-us/library/5ske5b71(v=vs.71).aspx
At times Visual Studio may take a very long time to start up. This typically happens because Visual Studio tries to load a large number of unnecessary files when it opens. A quick and dirty solution to fix this problem is to clear Visual Studio's list of most recently used (MRU) files from the registry. For Visual Studio 2008 the MRU values are stored in 

{{{HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\ProjectMRUList}}}
Under this key, you should find a list of string values that point to the projects that are loaded at startup. Make sure Visual Studio is closed and go ahead and clear them all!

Next time you open Visual Studio it should be much faster!

* For other versions of Visual Studio just replace {{{9.0}}} with your version.

* [[http://richarddingwall.name/2009/08/24/does-your-visual-studio-run-slow/]]
When you edit a paper in Latex (especially with the IEEETrans format), you sometimes end up with unexpected vertical gaps in the document between sections or around titles. These gaps appear because the Tex typesetting engine arranges the layout of your text so that it looks correct according to the style guidelines of your class and conforms as well as possible to the constraints specified by the user. However, when these constraints are too tight or in conflict with each other, you may end up with unexpected results in your final document. A common case is when you want to force a figure to be placed at a particular position in the document, but the figure is too big. The latter occurs when you use the {{{[H]}}} option for a figure or the {{{here definitely}}} option in the Lyx settings for a figure float.
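As an illustration of the figure-placement case, here is a sketch (file name and width are placeholders) of relaxing the rigid {{{[H]}}} constraint into the standard, more permissive placement options:

```latex
% Too rigid: [H] forces the figure exactly here, which can leave large gaps
% \begin{figure}[H]

% More flexible: let TeX choose among here (h), top (t), bottom (b),
% or a dedicated page of floats (p)
\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.8\columnwidth]{myfigure} % placeholder file name
  \caption{An example figure.}
\end{figure}
```

Giving TeX more placement options usually eliminates the vertical gaps, at the cost of the figure floating away from the exact spot where it appears in the source.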
Here are a few interesting topics that I would like to write about in the near future:
* Knowledge in the Internet Era
* David Allen and Getting Things Done
* --Accepting Digital Delivery--
* The Scourge of Social Networks
* The DRM Dilemma
* The Paradox of Piracy and How it Helps Preserve our Past
* The Triumph of Digital Delivery
* The Success of Steam
* World of Goo
* --Emulation--
* Exception to the Rule: Independent Games
* Garret Lisi and the exceptionally simple theory of everything
* Game Remakes
* 2D vs 3D in Games
* --Abusing Machine Learning--
* The Flock that Follows Business Trends
* Google's Secret: Lots of Data
* Thoughts on Free Software
* --Quake Live--
* Agressive DRM to Limit Early Game Piracy
* Why it is appropriate to reference Wikipedia for provocative writing
* How green has become an expedient to sell products
Most game demos these days are very short. Too short, I would say. But this is not what I want to talk about. The purpose of a demo is to give you a taste of the full game, to get you excited about it, and of course to give you an incentive to buy the full game when it comes out. Now, say that you played through those few levels available in the demo, liked it, and decided to buy the full game. Well, you will have to play those couple of levels all over again! Hurray, everybody likes redundancy...not! This is not much of a problem anymore, since game demos are so short, but I still believe that it ruins the excitement of starting a game anew. It was thus a pleasant surprise when I bought the full version of [[World of Goo|http://2dboy.com/games.php]], after having played the demo, to realize that all the demo levels were already cleared. I would like to see this more often!

Perhaps the best scenario is when the demo features some exclusive content that you won't find in the full game, but I  understand that you can't do this for every kind of game. 
Many games today are capable of producing very compelling visuals, but the player rarely has time to idle around and enjoy the view while playing. Paradoxically, the games with the best visuals are often fast-paced action games that give the player very little time to sit back, relax, and look at the artistry that makes modern titles so immersive and beautiful. The guys at [[Dead End Thrills|http://deadendthrills.com/]] addressed this issue by putting up a website dedicated entirely to compelling in-game photography. They collect the most artistic and good-looking screenshots of the latest games, and some of the images up there are really beautiful. 
A well-done PDF document can display an active table of contents that you can click to navigate through the various sections of the content. Now the question is whether you can generate a PDF file with this feature using Latex or Lyx, without resorting to the expensive Adobe Acrobat Professional. The answer is yes, and here's how. Add the following to the preamble of your document:
\usepackage[bookmarks=true]{hyperref}
\hypersetup{pdfborder={0 0 0}} 

The first line activates the feature (called bookmarks), while the second line ensures that the bookmarks don't display with an ugly red border around them.
TortoiseSVN on Windows is pretty much unparalleled in terms of ease of use and flexibility when it comes to version control with Subversion. Unfortunately, on Mac OS X there isn't anything nearly as good. Yet, there is a free project that attempts to reproduce some of TortoiseSVN's awesome shell integration capabilities. The program is called [[SCPlugin|http://scplugin.tigris.org/]].

NOTE: Follow the installation instructions carefully, especially if you are on Snow Leopard! Also remember to restart your system to see the changes.

There are also other alternatives on Mac, but the best ones are not free.

* [[RapidSVN|http://rapidsvn.tigris.org/]]

* [[SmartSVN|http://www.syntevo.com/smartsvn/index.html]]

Frankly, I believe that Versions is the best one, but it is also pretty expensive!
Typically, to get the IP address of your machine on UNIX-like operating systems, including Mac OS X, you use the command {{{ifconfig}}}. Unfortunately, this command prints a ton of information, and finding your IP address in all that text is not immediate. Here is a little command line that parses the output of ifconfig and only displays your IPv4 addresses (it can show more than one if you have more than one network interface):


ifconfig | grep 'inet ' | grep -v '127.0.0.1' | cut -d' ' -f2


This command line should work on most UNIX flavors, even though the invocation of {{{ifconfig}}} might be different, as explained [[here|ifconfig on UNIX Flavors]].
[[Graphviz|http://www.graphviz.org/]] is an excellent library developed by AT&T for laying out even the most complex large scale graphs. However, it is such a complex library that it is really difficult to build it on your own, especially considering that it depends on a very long list of other libraries. Luckily, there are pre-built binaries available for all major platforms. Now, the Windows version installs where you expect, as specified in the installer, but the Mac version comes with no documentation and silently installs the libraries in 

{{{ /usr/local/include }}}

and

{{{ /usr/local/lib }}}
This is an interesting fact about Windows. It turns out that the Windows command line supports a command that works almost exactly like Unix's {{{grep}}}. It is called {{{FindStr}}} (the capitalization, as usual, does not matter in Windows) and its usage is also very similar:

C:\>dir | findstr "DIR"
11/25/2009  10:06 AM    <DIR>          .TemporaryItems
11/25/2009  07:56 PM    <DIR>          .Trashes
10/19/2009  12:04 AM    <DIR>          09e321b25eb478c2c14c
10/19/2009  12:03 AM    <DIR>          477b8c9243662c17babdcd5ccaeec8cf
03/31/2010  03:52 PM    <DIR>          Autodesk
10/12/2009  03:02 AM    <DIR>          b82f7ead666625e08a
05/08/2010  04:23 PM    <DIR>          Backups
05/18/2010  02:21 AM    <DIR>          Config.Msi
10/10/2009  04:07 PM    <DIR>          Documents and Settings
10/10/2009  04:19 PM    <DIR>          f8c0232d1400e4f483
10/10/2009  04:18 PM    <DIR>          Intel
03/09/2010  05:25 PM    <DIR>          MoTemp
04/05/2010  10:30 PM    <DIR>          NVIDIA
02/19/2010  12:04 AM    <DIR>          OpenCV2.0
03/01/2010  02:19 PM    <DIR>          OpenCV2.0_Custom
05/18/2010  02:15 AM    <DIR>          Program Files
02/21/2010  04:27 AM    <DIR>          Python26
02/19/2010  06:47 PM    <DIR>          Qt
05/14/2010  11:57 AM    <DIR>          Riot Games
03/02/2010  07:12 PM    <DIR>          Temp
10/11/2009  02:21 AM    <DIR>          WinDDK
05/18/2010  02:21 AM    <DIR>          WINDOWS 
While Windows allows a user to modify most administrative settings of the system directly from the Control Panel and other readily accessible dialogs, some of them are kind of hidden and hard to find unless you know where to look. Clearly Microsoft conceals these advanced features to prevent novices from making a mess, but more often than not even experts may have a hard time figuring out how to accomplish certain tasks when they are not well publicized, even in official technical references. One of these features is {{{regedit.exe}}}, which is the main tool for accessing and modifying the Windows Registry, but most tech geeks typically know about it already. Another very useful tool is the Group Policy Editor, which enables administrators and power users to tweak fine-grained features of Windows. As usual this editor can be invoked from the //run// dialog by typing the proper command name, which is in this case: {{{gpedit.msc}}}. 

This [[page|http://msdn.microsoft.com/en-us/magazine/cc188951.aspx]] explains among other things how to accomplish a few useful tasks with the Group Policy Editor. 
If you press the HOME or END key on your keyboard in a Bash Shell, nothing typically happens. You can emulate the same functionality using the emacs shortcuts:

CTRL + A.......HOME
CTRL + E.......END

On Mac OS X, you can even map these commands to the HOME and END keys. Go to Terminal -> Preferences -> Settings -> Keyboard and edit the mappings:

HOME -> \001
END -> \005

On MacBook you trigger these keys using:
Fn + LEFT.......HOME
Fn + RIGHT.....END
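On systems where bash uses GNU Readline (which is nearly everywhere), you can also bind the keys persistently in {{{~/.inputrc}}}; here is a sketch, assuming your terminal sends the common escape sequences:

```
# ~/.inputrc -- bind the escape sequences most terminals send for HOME/END
"\e[H": beginning-of-line
"\e[F": end-of-line
# some terminals send these sequences instead:
"\e[1~": beginning-of-line
"\e[4~": end-of-line
```

Restart the shell (or press CTRL + X then CTRL + R) to reload the bindings.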
Whenever a Windows system gets hijacked by malware, one of the first symptoms is typically a broken Task Manager. A common way to disable the Task Manager is by setting a registry entry that instructs the operating system to block it through a Windows Policy. If this is in fact what is affecting the Task Manager, at least you would get an error message telling you that the Task Manager is indeed disabled. Windows Policies can be modified directly by using the [[Group Policy Editor|Group Policies in Windows]]. This [[page|http://ask-leo.com/why_is_my_task_manager_disabled_and_how_do_i_fix_it.html]] explains how to do so.

This is, however, the most obvious way of hijacking the Task Manager, but there is also another much sneakier way to do it. Unfortunately not even common anti-virus programs know how to deal with this subtle trick and you may often find yourself with a broken Task Manager even after you removed all traces of malware.   

There is a little-known feature in Windows, originally designed to let administrators debug applications as they start up, called //Image File Execution Options//. These options can be enabled by adding a registry key with the name of the target executable under

{{{HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options}}}

The values supported by this key are described in detail [[here|http://blogs.msdn.com/b/junfeng/archive/2004/04/28/121871.aspx]]. The most important however is the //string// value {{{Debugger}}}. If you create a string value with this name and set it to some executable, say {{{svhost.exe}}}, the latter will be used to execute the target executable (which is passed as a command line argument). In the case of a hijacked Task Manager you could have a registry key/value pair like:

* {{{HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\taskmgr.exe}}}
* {{{Debugger=svhost.exe}}}

In this example, every time you press the three-finger salute CTRL+ALT+DEL the system will run the command {{{svhost.exe taskmgr.exe}}}, which amounts to nothing, since the Task Manager is not a service!

To fix the problem, just delete the {{{Debugger}}} value!
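If you prefer not to edit the registry by hand, deleting the value can also be scripted with a {{{.reg}}} file; here is a sketch (the trailing minus sign tells regedit to delete the {{{Debugger}}} value):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\taskmgr.exe]
"Debugger"=-
```

Double-clicking the file (with administrator rights) applies the deletion; alternatively, you can delete the whole {{{taskmgr.exe}}} subkey if it contains nothing else.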

If the Task Manager was hijacked using this method:
# The executable {{{C:\Windows\System32\taskmgr.exe}}} should not be locked for writing.
# Renaming the executable (or a copy) should be sufficient to circumvent the hijack.

* [[http://ask-leo.com/why_is_my_task_manager_disabled_and_how_do_i_fix_it.html]]
* [[http://blogs.msdn.com/b/junfeng/archive/2004/04/28/121871.aspx]]
* [[http://blogs.msdn.com/b/greggm/archive/2005/02/21/377663.aspx]]
A friend of mine is really interested in holographic technologies and has some cool ideas on how to improve upon what is already available, which is not much. He prompted me to look at the [[work|http://www.lucente.us/pubs/CG97/CG97.html]] by Mark Lucente at MIT and I must say it is indeed very cool stuff. 

There are many questions of interest and a whole slew of unsolved problems. There is one problem in particular that is actually quite relevant to graphics. It appears that you can't just take a stereo pair and render it into a hologram; you really need to render spatial information in a form that is consistent with the physics of holography. This problem can be formulated as a twist on the familiar concept of ray tracing in graphics. 

Unsurprisingly, my friend """--who also works in graphics--""" is super excited about these things. After a few unsuccessful attempts he managed to make me excited too!
Go to {{{project properties->linker->advanced->target machine}}} and pick an architecture from the list. Note that the default is {{{Not Set}}}.
In order to get good performance out of Python code it is very important to iterate over //iterable// objects in the proper Pythonic way. This means that you should take advantage of Python's many facilities to operate on iterables, such as generators, expressions, lambda functions and so on. Avoid indexing iterables directly a la C/C++ at all costs! But how can you tell if an object is actually iterable? Apparently the only fully portable solution is to query the object as an iterable and catch the exception that would be raised if it is in fact not one. Here is a function that returns true if its argument is an iterable:

def isIterable( variable ):
	try:
		iter( variable )
	except TypeError:
		return False
	return True
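As a quick sanity check, here is a self-contained variant of the function with a few example calls; the results follow from how Python's built-in {{{iter()}}} treats each type:

```python
def is_iterable(obj):
    """Return True if obj can be iterated over, False otherwise."""
    try:
        iter(obj)          # ask Python to build an iterator over obj
    except TypeError:      # raised when obj does not support iteration
        return False
    return True

print(is_iterable([1, 2, 3]))  # True
print(is_iterable("abc"))      # True: strings are iterable
print(is_iterable(42))         # False: integers are not
```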

There are indeed other ways to perform the same test in Python, such as checking for the {{{__iter__}}} method:

hasattr( variable, '__iter__' )

However, while this approach may seem cleaner and more obvious, it is not truly compatible with Python 3.x.

* http://bytes.com/topic/python/answers/514838-how-test-if-object-sequence-iterable
# Log in as a regular user.
# Type {{{su}}} to become a //super user//.
# Enter the ''root'' password.
# Type {{{/usr/sbin/useradd newuser}}} to create the new account (of course, replace {{{newuser}}} with the actual name of the account). 
# Type {{{passwd newuser}}} to set a temporary password for the new user.
Among the many problems that affect video playback in PowerPoint, one is that it cannot always resolve the path to your videos correctly. Assuming that your video is in the file system, PowerPoint can fail because:
* The video is linked using an absolute path.
** If the absolute path of your presentation is longer than 128 characters, PowerPoint cannot play your file. This is a [[documented flaw|http://office.microsoft.com/en-us/powerpoint-help/my-movie-doesn-t-play-HA010077716.aspx]].

''Place your movies in the same folder as your presentation''
* While in principle you can use a relative path to link your videos, PowerPoint uses absolute paths by default, so if you move your presentation around, the video will break. On the other hand, PowerPoint always searches the local folder for videos even when an absolute path is specified. Therefore, it is always best to keep your videos in the same folder as your ppt files.

''Place your presentation in a path with a shorter length''
In most cases, you don't want to move your files around. So here is a trick to avoid the trouble:
# Share the folder that contains your presentation
# In Windows Explorer, choose {{{Tools->Map Network Drive...}}} (in Vista and Windows 7 you have to press ALT to see the Tools menu)
# Use {{{\\YourComputerName\NameOfSharedFolder}}} as the target.
# Now you will see a new drive letter (e.g. Z) in Windows Explorer that maps to the location of your presentation. Click on the new drive letter and open your presentation from there. 

Now, the absolute path of your presentation is something like {{{Z:\MyPresentation.ppt}}}, which is pretty short!

* [[http://www.pptools.com/fixlinks/index.html]]
* [[http://www.pptfaq.com/FAQ00433.htm]]
If you repeatedly connect to a Unix server through //ssh// and fail to authenticate properly (e.g. use the wrong password), the system is likely to ban your IP address. When that happens, you will get the message {{{Server unexpectedly closed network connection}}} and you can forget about accessing the server anymore. Unfortunately, on most systems you won't be able to access the server even if you are a legitimate user and have the root password for the server.

Now, say that you have the root password and you want to //unban// yourself. First of all, you should either log in to the server from a different machine, or simply {{{ssh}}} into another server and then {{{ssh}}} to the original server from there. Secondly, you have to edit some configuration files to remove yourself from the blacklist. The files to edit are:
* {{{/etc/hosts.allow}}} 
* {{{/etc/hosts.deny}}} 
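As a sanity check before touching the real files, the idea can be sketched in Python: filter any line mentioning the banned address out of a {{{hosts.deny}}}-style list. The entry format below is an assumption; real files may use daemon lists, wildcards, or netmasks, so inspect them by hand before editing.

```python
def unban_ip(deny_lines, banned_ip):
    """Return hosts.deny-style lines with any entry mentioning banned_ip removed."""
    return [line for line in deny_lines if banned_ip not in line]

# Hypothetical file contents:
deny_file = [
    "# /etc/hosts.deny",
    "sshd: 192.168.1.77",
    "ALL: 10.0.0.0/255.0.0.0",
]
print(unban_ip(deny_file, "192.168.1.77"))
```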

* [[http://www.addshit.com/13/SSH:_Connection_refused_-_banned_IP_address,_unban_and_allow_access/]]
* [[http://www.freebsddiary.org/ssh_refused.php]]  
One of the major annoyances of the Mac UI is that it relies too much on the mouse. This shortcoming becomes particularly annoying when you work with multiple screens, since you may need to drag items and windows around your desktop over longer distances. A little commercial application called SizeUp provides a fairly interesting solution to this problem. SizeUp defines some global key bindings that allow you to reposition and resize windows on the screen using only the keyboard. Here's the URL:


Unfortunately, this app is not free!
While in most cases you want Python code to be entirely oblivious of the underlying platform, there are some situations in which it is useful to have Python behave differently depending on the system it is running on. In fact, Python provides several facilities that interact directly with specific aspects of the underlying operating system and its APIs. However, before you can take advantage of any platform-specific functionality, you need a cross-platform way to figure out which system you are running on in the first place.

The easiest way to determine the system on which Python is running is to check the value of {{{os.name}}}:

>>> import os
>>> os.name

On Windows the value of {{{os.name}}} is {{{nt}}}, while on most UNIX-like platforms such as Linux and Mac OS X it is {{{posix}}}.

However, this is not the best way to probe the system. The best way to do so is by using the {{{platform}}} module, which is described [[here|http://docs.python.org/library/platform.html]]. Specifically, one of the most useful functions in the {{{platform}}} module is a cross-platform {{{uname}}} interface that provides a great deal of useful information about the underlying platform. Here is one example:

>>> import platform
>>> platform.uname()
(..., 'x86 Family 6 Model 23 Stepping 10, GenuineIntel')

If you only want to know the name of the underlying operating system, you can simply use {{{platform.system()}}}, which is the same as {{{platform.uname()[0]}}}.

These are some of the identifiers returned for the most popular operating systems:

//Windows:// {{{Windows}}}
//Mac OS X:// {{{Darwin}}}
//Linux:// {{{Linux}}}
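These identifiers make it easy to branch on the host OS; here is a minimal sketch (the friendly-name mapping is my own):

```python
import platform

def friendly_name(system):
    """Map a platform.system() identifier onto a friendly OS name."""
    return {
        "Windows": "Windows",
        "Darwin": "Mac OS X",
        "Linux": "Linux",
    }.get(system, "Unknown")

# Dispatch on the machine this snippet actually runs on:
print(friendly_name(platform.system()))
```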
Due to Chrome's restrictive default security settings, the awesome """TiddlyWiki""" requires some tweaks to work correctly in this browser. Here's how:
# ensure that the file {{{TiddlySaver.jar}}} is in the same folder where your """TiddlyWiki""" is located. This file is part of the standard distribution of """TiddlyWiki"""
# run Chrome with the {{{--allow-file-access-from-files}}} command line option
# make sure that you have the latest version of the Java plug-in installed
# point the browser to your """TiddlyWiki"""
# confirm that you trust the security certificate when Chrome asks about it
# Chrome does not allow pages on your filesystem to create cookies, so you won't be able to save  options for """TiddlyWiki""" in the usual way. Instead, do the following to save options permanently without using cookies:
## create a tiddler called {{{SystemSettings}}}
## open the {{{options->AdvancedOptions}}} tiddler and use the {{{name}}} field to set options in the {{{SystemSettings}}} tiddler as you wish. For example, your {{{SystemSettings}}} tiddler may look like [[this|SystemSettings]]
The best way to deal with step (2) is to set the command-line option directly in an application shortcut.

//Windows//

* Create a shortcut for Google Chrome either on your desktop or the taskbar
* Right-click on the shortcut and select {{{Properties}}}
* Go to the {{{Shortcut}}} tab
* Add the command-line argument at the end of the {{{Target}}} field

//Mac OS X//
* You need to set the command line option on a dock item using Automator. The steps to accomplish this are described well here: http://superuser.com/questions/271678/how-do-i-pass-command-line-arguments-to-dock-items
In principle, Chrome should allow you to save cookies from file """URLs""" using the {{{--enable-file-cookies}}} command line option, but this solution does not seem to work for """TiddlyWiki""".
* http://josefbetancourt.wordpress.com/2010/06/12/saving-a-tiddlywiki-page-in-chrome-browser/
* https://groups.google.com/group/tiddlywiki/browse_thread/thread/4df22c4e1baa9552/3eb4743aeaa55ff9?show_docid=3eb4743aeaa55ff9
* http://code.google.com/p/chromium/issues/detail?id=535
Most of the time, when you try to "safely eject" an external drive in Windows, you'll get a message saying that the drive can't be ejected because it is in use. This is very irritating, considering that it happens even if you never tried to access the drive with any application. One way to discover which programs are locking the drive is to use the program {{{handle.exe}}} by Sysinternals. On the command line type:
handle.exe F:

where you should replace {{{F:}}} with the actual letter of the drive that you are trying to eject. This command will tell you which programs are locking your drive. Close those programs to finally unlock the drive.

* http://www.michaelhinds.com/tech/win/this-device-is-currently-in-use.html
On Windows, Google Chrome allows you to create desktop shortcuts for web applications, such as Google's own Gmail. However, once in a while the icons of these shortcuts may go missing. When you create an application shortcut, Google Chrome downloads the icons of the web application into a special folder. To fix a missing icon, either create the shortcut again, or link the shortcut to the icon again. Here is where to find the icons:

''Windows XP''
{{{C:\Documents and Settings\Gabe\Local Settings\Application Data\Google\Chrome\User Data\Default\Web Applications\NAMEOFAPP\}}}

The actual file for the icon is located in a subfolder called either {{{http_80}}} or {{{https_80}}}
When building a //dylib// in XCode, there is an important option that is easily overlooked and that can lead to a lot of trouble if not set correctly. The option is called {{{Installation Directory}}} and it embeds a specified path directly inside the binary of the dylib. This path is used by all applications that are linked dynamically against the library.

Let's consider an example:

* Suppose that you build a dynamic library called {{{mylib.dylib}}} and you set the {{{Installation Directory}}} to {{{/usr/local/lib}}}. 
* Now, say that you build an executable called {{{user.app}}} that is linked dynamically against {{{mylib.dylib}}}
* You will get a run-time error unless you place {{{mylib.dylib}}} in {{{/usr/local/lib}}}

This is unfortunately what will most likely happen if you rely on XCode's defaults.

The most portable solution is to:
* Set the {{{Installation Directory}}} for the dynamic library to {{{./}}}, which means that the library is searched for in the current directory
* In  the executable's XCode project, add a //Copy Build Phase// that copies the specified library to the executable's folder

* You can verify the installation directory of a library with {{{otool -L}}}
* If you have problems with a dynamic library you can always use the command line tool {{{install_name_tool}}} to modify the install directory.
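Since {{{otool -L}}} prints each dependency together with its embedded install path, those paths are easy to pull out programmatically. Here is a small Python sketch; the sample output below is made up for illustration:

```python
def install_paths(otool_output):
    """Pull the install paths out of `otool -L`-style output.

    Dependency lines are indented with a tab and look like
    "<path> (compatibility version ..., current version ...)".
    """
    paths = []
    for line in otool_output.splitlines():
        if line.startswith("\t"):
            paths.append(line.strip().split(" (")[0])
    return paths

# Made-up output for an app linked against mylib.dylib:
sample = (
    "user.app:\n"
    "\t./mylib.dylib (compatibility version 1.0.0, current version 1.0.0)\n"
    "\t/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, "
    "current version 111.0.0)\n"
)
print(install_paths(sample))  # → ['./mylib.dylib', '/usr/lib/libSystem.B.dylib']
```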
Most types of smart pointers provided by Boost are indeed thread safe. However, Boost also provides a special lightweight smart pointer called {{{intrusive_ptr}}} that is not. The latter is designed for high-performance applications that need to control the details of reference counting and avoid even the small performance hit of thread locking.

# http://www.boost.org/doc/libs/1_45_0/libs/smart_ptr/smart_ptr.htm
# http://www.codeproject.com/KB/stl/boostsmartptr.aspx
When you install Visual Studio or other IDEs such as Qt Creator on Windows, they may register themselves as Just-In-Time (JIT) debuggers. An application registered as a JIT debugger is invoked by the operating system whenever an application crashes, and it is handed useful debugging information about the state of the faulty application at the moment of the crash.

JIT debuggers are registered through the registry keys:

{{{HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug\Debugger}}}  
{{{HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug\Debugger.Default}}}  

# http://msdn.microsoft.com/en-us/library/5hs4b7a6.aspx
Spotlight search on Mac OS X is very good and fast, but it still has a few little problems and limitations:
* It cannot find files by name
* It often overwhelms you with a huge number of hits
* It does not search protected system files

The solution for all these shortcomings is an excellent application called [[Find Any File|http://apps.tempel.org/FindAnyFile/]]

It searches for files the way the UNIX //find// command does, and it is surprisingly fast too.
Once in a while, under unusual circumstances, Windows XP may lose the process handle to a command prompt window. When this happens, you can still minimize, maximize, and move the window around, but you most definitely cannot close it. In fact, the corresponding {{{cmd.exe}}} process is not even visible in the //task manager//. In other words, the console window turns into a zombie! Even more powerful administrative programs like [[process explorer|http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx]] can't attach to the lost console anymore.

This very odd problem is often caused by Visual Studio's debugger. If you are running a console application in the debugger and an exception occurs inside a worker thread, then the operating system may literally lose the handle to that thread and thus leave a zombie command prompt behind.

Apparently, this quirk was introduced by security update //KB978037// for Windows XP, released in February 2010. Of course, debuggers can serve as powerful hacking tools, so Microsoft decided to target its own Visual Studio debugger as a threat! Way to go!

In any case, removing this problem is not too difficult:
* Go to the Control Panel
* Choose {{{Add or Remove Programs}}}
* Check {{{Show Updates}}}
* Find the entry called {{{Security Update for Windows XP (KB978037)}}}.
* Click ''Remove''.

As usual, it is a good idea to create a Restore Point before doing this, just in case.

* [[http://irrlicht.sourceforge.net/phpBB2/viewtopic.php?p=218355&sid=ec0bd112a2f65764d3c95fba1698531d]]
* [[http://www.codeproject.com/Messages/3368822/Zombie-command-window-after-debugging-VS2008.aspx]]
Mac Mail has always been a somewhat buggy piece of software. The version that ships with Snow Leopard works OK for the most part, but once in a while it has the occasional annoyance. One recurring problem is that Mail may end up downloading your IMAP inbox multiple times, resulting in thousands of duplicate messages. This problem typically occurs the first time that you set up an IMAP account. Andreas Amann wrote a popular [[suite of AppleScripts|http://homepage.mac.com/aamann/Mail_Scripts.html]] that should address this problem, but they never worked for me. One solution that I found is to simply rebuild your mailbox database. It is a slow process, since it forces Mail to download your entire mailbox from scratch, but it seems to resolve most problems.

Here is how to rebuild your mailbox:

* Select the mailbox that you want to rebuild (typically your inbox folder)
* Select ''Mailbox -> Rebuild'' from the menu
When you close the lid of a MacBook, the device goes to sleep as expected. One key difference from most other laptops is that the USB ports still deliver power. This is useful, because you can charge your iPod or other USB device even while the MacBook is sleeping.

If the MacBook is closed and sleeping with no external monitor attached, it will behave as expected and preserve its sleeping state even when it is disturbed. If, for instance, you have a USB mouse attached to the sleeping MacBook, and you move it, the MacBook will wake up for a few seconds and then go back to sleep immediately after.

If instead you have an external monitor attached to a closed MacBook, then it will start to suffer from insomnia. If the MacBook is closed and sleeping and you perturb it as before by moving an attached external mouse, the MacBook will wake up, show the desktop on the external monitor, and stay awake until you put it back to sleep. This happens even though the lid is closed!

This strange behavior is probably by design and not a bug. A user may want to show a video or play some other media file on the external screen without having to keep the MacBook's lid open. It is still very strange!
If you want to discover the dependencies of a port that you have already installed, you can use the well-documented {{{deps}}} command:
Wormhole:~ Gabe$ port deps subversion
Library Dependencies: expat, neon, apr, apr-util, db46, sqlite3, gettext, libiconv, serf,
But what if you want to assess all the dependencies of a port //before// installing it? Some ports have a crazy number of dependencies, and you might decide that a particular port is not worth installing once you see how many dependent libraries you would have to install as well. It is also useful to know this when you have problems building a particular port. So, here is the command for //Ethereal//, a sample port with a lot of dependencies:
port echo depof:ethereal

If you want to be thorough and find all dependencies recursively, use
port echo rdepof:ethereal
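To get a feel for how heavy a port is, you can count the entries in the output of the {{{deps}}} command. Here is a small Python sketch that parses a dependency line of the form shown above for subversion:

```python
def count_deps(deps_line):
    """Count the comma-separated entries in a `port deps` output line."""
    # Strip the "Library Dependencies:" label, then split on commas.
    _, _, deps = deps_line.partition(":")
    return len([d for d in deps.split(",") if d.strip()])

line = ("Library Dependencies: expat, neon, apr, apr-util, db46, "
        "sqlite3, gettext, libiconv, serf")
print(count_deps(line))  # → 9
```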
There are times in which you need to install an existing MacPorts library with a different variant. Say, for instance, that you want to build the library {{{openssl}}} with the {{{universal}}} variant, but some of its active dependencies are already built for a 64-bit architecture. In this case, the usual command

sudo port install openssl +universal

will bail out if any of the dependencies do not match the requested variant.

You can, however, enforce variants in MacPorts, forcing the installation procedure to also rebuild all dependencies for the requested variant:

sudo port upgrade --enforce-variants openssl +universal

Be careful though! If any other library shares dependencies with the one that you are going to install, it is going to break.
[[TiddlyWiki Reference]]
[[Copyright Notice]]
If you right-click on a file in the //solution explorer// and open the file property window in Visual Studio, you can set useful yet little-known options for your build configuration.

''Excluded From Build''
The file is not built under the current configuration. When this option is enabled, a little red sign appears on the file's icon in the currently active configuration.

''Tool''
You can select which tool is actually used to build the file. Typically, this option should be set to ''C/C++ Compiler Tool''.
While it is common wisdom that //memcpy// should be faster than the more traditional (and cleaner) way of copying data using a for loop, some people disagree. Of course, the performance of these operations varies greatly depending on the hardware, the OS, and the compiler that is used. I performed a few experiments to get some insight. For all experiments I ran the following code, which allocates 400MB of RAM, fills the storage with predefined values, and then copies the data using two different methods. Here is the code:

#include <iostream>
#include <cstdlib>	// malloc, free, rand, srand
#include <memory.h>	// memcpy
#include <boost/timer.hpp>

const unsigned int LARGE_BLOCK_SIZE = 100000000;
typedef unsigned int AllocationType;

int main()
{
	AllocationType* largeMemoryBlock1;
	AllocationType* largeMemoryBlock2;
	std::size_t memoryBlockSize = sizeof( AllocationType ) * LARGE_BLOCK_SIZE;
	largeMemoryBlock1 = (AllocationType*)malloc( memoryBlockSize );
	largeMemoryBlock2 = (AllocationType*)malloc( memoryBlockSize );
	std::cout << "allocated: " << memoryBlockSize << " bytes" << std::endl;

	srand( 10 );
	for( unsigned int i = 0; i < LARGE_BLOCK_SIZE; ++i )
		largeMemoryBlock1[ i ] = rand();

	boost::timer timer;
	memcpy( largeMemoryBlock2, largeMemoryBlock1, memoryBlockSize );
	std::cout << "memcpy took: " << timer.elapsed() << " seconds" << std::endl;

	timer.restart();	// time the iterative copy separately
	for( unsigned int i = 0; i < LARGE_BLOCK_SIZE; ++i )
		largeMemoryBlock2[ i ] = largeMemoryBlock1[ i ];

	std::cout << "iterative copy took: " << timer.elapsed() << " seconds" << std::endl;

	// the blocks came from malloc, so release them with free, not delete[]
	free( largeMemoryBlock1 );
	free( largeMemoryBlock2 );

	return 0;
}	// main


''Experiment 1''
//CPU: Intel Core 2 Quad 2.6 GHz
Memory: 3GB
OS: Windows Vista 32bit
Compiler: Visual Studio 2008
Build Type: Debug//

//memcpy: 0.234 seconds//
// iterative: 0.374 seconds//

Here memcpy is the clear winner. Increasing the amount of memory makes the gap between the two even larger. Let's now peek at the assembly code generated by the compiler. Here is the code for //memcpy//:

	memcpy( largeMemoryBlock2, largeMemoryBlock1, memoryBlockSize );
009F15CC  mov         eax,dword ptr [memoryBlockSize] 
009F15CF  push        eax  
009F15D0  mov         ecx,dword ptr [largeMemoryBlock1] 
009F15D3  push        ecx  
009F15D4  mov         edx,dword ptr [largeMemoryBlock2] 
009F15D7  push        edx  
009F15D8  call        @ILT+345(_memcpy) (9F115Eh) 
009F15DD  add         esp,0Ch 

This code reveals that the compiler simply forwards the task of copying memory to the C runtime's {{{memcpy}}} implementation through a function call. Unfortunately, we can't tell from this listing alone how many instructions that call ultimately amounts to.

Here is the assembly for our for loop:

	for( int i = 0; i < LARGE_BLOCK_SIZE; ++i )
010615BB  mov         dword ptr [i],0 
010615C2  jmp         main+0BDh (10615CDh) 
010615C4  mov         eax,dword ptr [i] 
010615C7  add         eax,1 
010615CA  mov         dword ptr [i],eax 
010615CD  cmp         dword ptr [i],5F5E100h 
010615D4  jae         main+0E0h (10615F0h) 
		largeMemoryBlock1[ i ] = rand();
010615D6  mov         esi,esp 
010615D8  call        dword ptr [__imp__rand (106A434h)] 
010615DE  cmp         esi,esp 
010615E0  call        @ILT+455(__RTC_CheckEsp) (10611CCh) 
010615E5  mov         ecx,dword ptr [i] 
010615E8  mov         edx,dword ptr [largeMemoryBlock1] 
010615EB  mov         dword ptr [edx+ecx*4],eax 
010615EE  jmp         main+0B4h (10615C4h) 

This is pretty standard fare.

''Experiment 2''
//CPU: Intel Core 2 Quad 2.6 GHz
Memory: 3GB
OS: Windows Vista 32bit
Compiler: Visual Studio 2008
Build Type: Release//

//memcpy: 0.312 seconds//
// iterative: 0.218 seconds//

These results are in the same ballpark, but notice that //memcpy// actually takes somewhat longer in Release mode, and this result is consistent across multiple trials. Let's again look at the disassembled code to understand why.


	memcpy( largeMemoryBlock2, largeMemoryBlock1, memoryBlockSize );
003E1077  push        17D78400h 
003E107C  push        edi  
003E107D  push        ebx  
003E107E  mov         ebp,eax 
003E1080  call        memcpy (3E1C60h) 
003E1085  add         esp,0Ch 


Here the compiler calls a different version of the runtime's copy routine with a different calling convention. This version requires fewer instructions at the call site, suggesting that the optimizer has done a good job, yet we measured that it takes longer: in this case the library call itself is slower. Here is the iterative solution:


	for( int i = 0; i < LARGE_BLOCK_SIZE; ++i )
00CB10ED  mov         ecx,edi 
00CB10EF  mov         dword ptr [esp+3Ch],eax 
00CB10F3  mov         eax,ebx 
00CB10F5  sub         ecx,ebx 
00CB10F7  mov         edx,5F5E100h 
00CB10FC  lea         esp,[esp] 
		largeMemoryBlock2[ i ] = largeMemoryBlock1[ i ];
00CB1100  mov         ebp,dword ptr [ecx+eax] 
00CB1103  mov         dword ptr [eax],ebp 
00CB1105  add         eax,4 
00CB1108  sub         edx,1 
00CB110B  jne         main+100h (0CB1100h) 

Here the optimizer has reduced the code a little bit. 

The iterative solution is effectively faster in ''Release'' mode, which is how programs are ultimately compiled when they ship. The reason is that, at least on this Windows platform and compiler, {{{memcpy}}} resolves to a call into the C runtime that may end up being slower than a for loop massaged by the optimizer.
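For what it's worth, the same contrast can be sketched in Python, where a slice copy plays the role of //memcpy// and an explicit loop plays the role of the for loop. Timings vary by machine (and in CPython the interpreted loop is essentially always the slower one, unlike the Release-mode C++ result above), so the sketch only verifies that both methods copy the data correctly:

```python
import time

N = 1_000_000
src = list(range(N))

# Bulk copy: a single library-level operation, analogous to memcpy.
t0 = time.perf_counter()
bulk = src[:]
bulk_time = time.perf_counter() - t0

# Element-by-element copy, analogous to the explicit for loop.
t0 = time.perf_counter()
loop = [0] * N
for i in range(N):
    loop[i] = src[i]
loop_time = time.perf_counter() - t0

print(f"bulk copy: {bulk_time:.4f}s  iterative copy: {loop_time:.4f}s")
```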
Despite what many detractors say about Microsoft, the engineers who work there are highly smart and capable professionals. After all, Microsoft's wealth allows it to attract the best minds in the business. To convince yourself, just take a look at the enviable roster of top-notch computer scientists who work at Microsoft Research. Likewise, the Microsoft Office team is composed of thousands of talented developers. So, why is it that Microsoft Office sucks? In my view, there are a few key reasons:

* Microsoft's strategy of trying to make too many people happy at once.
* Microsoft's aggressive market strategy.
* Short development cycles. 
* Backwards Compatibility.
* The logistics of having way too many developers working on a single product.
* Too many features.
* Highly over-engineered software.

''Microsoft's strategy of trying to make too many people happy at once''

Microsoft Office is one of those applications that tries to make everyone happy, but it generally fails to truly satisfy anyone. In fact, Microsoft Word users range from fifth graders writing reports for school all the way to seasoned software professionals developing advanced Office applications for their employers. It is wicked hard to design an application that can cope with the often competing demands of such a broad variety of users. For instance, entry-level users generally have little knowledge of how to operate a computer and feel overwhelmed by most software. They need a highly simplified interface that is very easy to use, has a minimal learning curve, and does the right thing even when the user is clueless about how to proceed. This is incidentally one of the reasons why Microsoft Word often infuriates expert users """--it tries to make decisions of its own""". You know what I am referring to: that thing where Word suddenly decides to change the layout of your paragraph for no apparent reason. These behaviors often feel like inexplicable bugs to those who know their trade. And often they are bugs indeed! Also, professionals seek a good level of control over the application, and they generally don't appreciate disappearing menu items or the kind of irrational formatting rules for bullets and tables that dominated most Office versions up to the 2003 release.

The problem is, however, that for the most part there is little overlap between the requirements of these different groups of users. Adobe understood this several years ago and decided to differentiate its product line to reach a larger user base without compromising the integrity of its software. This led to the pairing of Photoshop Elements and Premiere Elements with their more venerable siblings to fulfill the needs of novice users (and of those who are not willing to spend 600 dollars on a piece of high-end software).

''Microsoft's aggressive market strategy''

Another problem is that Microsoft always tries to outdo the competition by releasing its products first on the market, often at the expense of software quality. Microsoft's business model is generally to gather many software partners early on by luring them with great promises, but when the promises are too ambitious they inevitably fail to deliver. They did this for most of the nineties, and they did it again with the "big bomb" Vista.

''Short development cycles''

Doing too much too soon also results in development cycles that are relatively short compared to the number of features targeted for each release.

''Backwards Compatibility''

Backwards compatibility has always been Microsoft's Achilles' heel. They have had to keep up with poor design decisions made early on in order to preserve compatibility of newer products with their buggy predecessors. Yet when they try to remedy their mistakes, they either end up with something worse, or they meet the rage of their users.
I was trying to understand the file dependencies behind NVIDIA's implementation of OpenGL in Windows Vista. After some hacking I found several interesting things:
* The main file of interest is //C:\Windows\System32\nvoglv32.dll//. This file is the main ''ICD (Installable Client Driver)'' for OpenGL provided by NVIDIA
* Doing a search for //dll// in the binary of opengl32.dll reveals another interesting fact. The DLL actually refers to //ddraw.dll// and //gdi32.dll//. So it seems that when you use OpenGL in Windows you are actually relying on some functionality of DirectX and GDI.

I remember having read long ago that Microsoft may drop direct support for OpenGL in Windows Vista and that ICD's would create a level of indirection by routing OpenGL calls to DirectX. A horrible thing if it were true! However, Microsoft later promised that this would not happen and that vendors would get direct kernel access to provide hardware acceleration through the ICD. Is this a lie? My findings were suggesting that...

Not to worry! This [[page|http://msdn.microsoft.com/en-us/library/ms797549.aspx]] verifies that ICD's have indeed low level access and the details fully agree with what I found.
Once in a while it is useful to write little console programs that run an infinite loop waiting for some event to occur. Unfortunately, there is no portable way of performing even such a simple operation. In this tiddler I show how to do this on various operating systems.

In Windows you have to use the {{{_kbhit}}} and {{{_getch}}} functions in conjunction as shown here:
	int ch = ' ';

	do
	{
		// process events here

		if( _kbhit() )
		{
			ch = _getch();
			ch = toupper( ch );
		}

		// sleep a little so this process won't take over the system
		Sleep( 5 );

	} while( ch != 'Q' );

Note that the leading underscores in {{{_kbhit}}} and {{{_getch}}} indicate that these functions are Microsoft-specific and not part of the ANSI standard, so this code is not portable.

* [[http://msdn.microsoft.com/en-us/library/58w7c94c(v=vs.80).aspx]]
I found this very interesting [[article|http://en.wikipedia.org/wiki/Video_game_crash_of_1983]] that reveals many facts about the history of games I did not know about. Two things I discovered are:
* Why game consoles were not very popular in the mid 80s in Europe
* Why consoles even today have such restrictive protections

Here are two good snippets:

//The significantly lower price of computer games (some of which cost just 1% of the price of a computer, due to being stored on inexpensive cassette tapes rather than the plastic cartridges of consoles) strengthened this domination and helped quickly create a mass computer games market. By the time of the 1983 North American console crash, the European video games industry was mostly computer-based and most games were made by European publishers. This allowed the European market to continue to thrive despite the crashing American console market.//

//Using secrecy to combat industrial espionage had failed to stop rival companies from reverse engineering the Mattel and Atari systems and hiring away their trained game programmers. Nintendo, and all the manufacturers who followed, controlled game distribution by implementing licensing restrictions and a security lockout system.//
One typically tedious aspect of writing a paper in LaTeX is compiling a proper bibliography in BibTeX. Typing bibliographic entries by hand is never fun and never easy, since you can't find all the necessary information directly in the paper and need to rely on external sources. Google is generally the best tool for the job, but you usually need to do some digging to actually find what you are looking for. Thankfully, it turns out that Google can actually give you a full BibTeX entry! Go to [[Google Scholar|http://scholar.google.com]] and click on ''Scholar Preferences''. At the bottom of the page, under ''Bibliography Manager'', select //Show links to import citations into BibTeX//. Next time you search for a paper in Google Scholar, you'll find a link below each hit that will bring you to the corresponding BibTeX entry.

The BibTeX entries are only available when you search for a paper in Google Scholar; you won't find them in a general Google search.
Often when you load and convert an old project to work under Visual Studio 2010 you get the error: {{{MSB4006 circular dependency error}}} on your build. This typically happens for solutions that have multiple interdependent projects. This is a known problem and a workaround for it is:
# Go to {{{Project Property Pages->Common Properties->Framework and References}}}.
# Click on the {{{Remove References}}} button.

* http://connect.microsoft.com/VisualStudio/feedback/details/522854/project-converted-from-vs2005-gets-msb4006-circular-dependency-error-on-build#
On March 23rd 2009, Steve Perlman """--a seasoned technology expert and entrepreneur--""" unveiled a new service called OnLive. According to [[this article|http://venturebeat.com/2009/03/23/steve-perlmans-onlive-could-turn-the-video-game-world-upside-down/]] the service has the potential to revolutionize the gaming industry. OnLive provides a streaming technology that will let gamers play advanced games on low-end internet machines, while all computations are done remotely on a massive computer cluster. The idea behind OnLive is certainly not new. In fact, many have tried to do something similar, starting almost 20 years ago, and failed miserably. Most recently, Infinium Labs came up with a product called [[Phantom|http://en.wikipedia.org/wiki/The_Phantom_(game_system)]], but the outcome was a total disaster. OnLive instead shows great potential and seems to be positioned better than its less fortunate predecessors, both in terms of timing and of technical capability. Yet it remains to be seen whether it lives up to expectations when it comes out next winter.

The technology per se is not revolutionary. [[Protocols|http://en.wikipedia.org/wiki/Remote_Desktop_Protocol]] for operating a desktop remotely have been around for many years, and remote clients that use these protocols are available for free on all major platforms. Actually, it is not even hard to set up a system so you can run a game on a remote machine. After all, what OnLive does is not too different from an HD video streaming service like Vimeo or Netflix. The only real difference is that you have to send a small amount of additional information to account for user input, which is not a big deal.

Perlman claims that the real deal about OnLive is their compression technology. He even goes as far as saying that they can achieve 200-fold compression in some cases. It sounds amazing indeed, but it is not impossible. They won't tell us how they do it, but I can make an educated guess. When you do regular video compression using, say, H.264, you have to guess the structure of the data using transform methods, which are typically very expensive computationally. One problem with transform methods is that they can't always predict well which parts of the data are more salient than others, so you cannot filter out information too aggressively if you want to preserve moderate video quality. Here, instead, you know the exact geometry of the scene and you can query this information directly from the video card. Therefore, you know exactly which objects are moving and which ones are not, you know which textures are used and where, and you also have depth information that can be used to exploit LODs (Levels Of Detail) in your compression scheme.

Perlman himself dropped some additional cues about his compression technology at his GDC press conference. He contrasted OnLive's technology against regular video streams, which are composed of a linear sequence of frames, by noting that user interaction can give you valuable information as to what data to keep and what data to throw away. I bet they did a massive amount of R&D to relate user events to the scene geometry and to the final video output, in order to figure out which information can be dropped to save bandwidth. After all, he is the guy behind QuickTime, and we can trust that he knows what he is doing.

Let's do a few calculations. They say that you need a 5 Mbit/s connection to enjoy a game at 720p resolution, so we get:

{{{
stillSize = 1280 * 720 * 24 = 22118400 bits
framesPerSecond = 24 / sec
bitsPerSecond = stillSize * framesPerSecond = 530841600 bits / sec
connectionRate = 5 * 2^20 bits / sec = 5242880 bits / sec
requiredCompressionRatio = bitsPerSecond / connectionRate ~= 101
}}}

Thus, if you want to run your game at 24 fps and you have a 5 Mbit/s broadband connection, the average compression ratio must be about 101 times! This is without considering the additional bandwidth required for audio and control input. So, if they have a compression technology with a peak compression ratio of 200 times that can provide an average compression ratio of half that, then we are in business.
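For the record, this back-of-the-envelope calculation is easy to check in a few lines of Python (assuming, as above, that the 5 Mbit/s figure means 5 * 2^20 bits per second):

```python
# Reproduce the back-of-the-envelope estimate above.
still_size = 1280 * 720 * 24          # bits in one uncompressed 720p frame
frames_per_second = 24
bits_per_second = still_size * frames_per_second

connection_rate = 5 * 2 ** 20         # a 5 Mbit/s link, in binary megabits

ratio = bits_per_second / connection_rate
print(round(ratio))                   # prints 101
```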

The fact that Nvidia is one of the main collaborators on this project is another clue that geometry may actually play an important role in their streaming technology.

Besides the challenge of compression, which is an obvious one, there are other major challenges that OnLive needs to overcome to be successful commercially:

How are they going to cope with the phenomenal amount of bandwidth that is required to service so many data streams? Keep in mind that even YouTube is beginning to feel the pinch on its network. Perlman suggested that the OnLive servers control how network packets are constructed at a very low level and that they have worked out all the kinks with major ISPs. Still, this is a very challenging proposition.

How are they going to cope with the phenomenal amount of computational power that is required to service a large number of instances of high-end games? This is perhaps the biggest obstacle that they must overcome to make OnLive viable. And, to be honest, I can't see how they are going to do it. Take a game like Crysis, for instance, which is one of the games they showed at their press conference. This is a game that can bring even a high-end machine with quad-SLI to its knees. Now, say that a thousand users would like to play this game; how are they going to service such demand? Are they going to set up 1000 high-end servers, so that every gamer gets to run the game on a single machine? Are they going to distribute the load on a large cluster of special servers? Perlman only hinted at the fact that they designed special hardware to cope with this problem.

If they had to install the equivalent of a high-end gaming PC for every running game, it is fairly clear that the issue of scalability would quickly choke the service. However, they may be relying on a more advanced infrastructure for cloud computing similar to AMD's [[Fusion Render Cloud|http://blogs.amd.com/unprocessed/tag/fusion-render-cloud/]]. In fact, a similar service called [[OTOY|http://www.crunchbase.com/company/otoy]] is known to rely on AMD's solution.

Still, even if they were to use a computing cloud, I am not entirely convinced that it would work. Despite what Perlman says, this is not like Folding@Home, which can use a fairly traditional paradigm for distributed computing. Load balancing is at the core of every large-scale distributed system, and in general you can afford to have some individual tasks run a little slower on some machines to prevent choking the system. Here you don't have that much leeway, since all game streams must be processed with minimal latency to run smoothly. And, of course, gamers will not tolerate playing their games below a certain frame rate! Also, most games are extremely demanding, and it is not easy to parallelize the execution of a piece of software as complex as a game beyond what developers have already done to exploit multi-core architectures. In other words, the assumption that you can break a difficult task into small computational packets doesn't hold well here.

The problem of scalability is therefore twofold. On the one hand, the number of users will increase steadily if the service is successful. On the other hand, the computational demands of games increase at a steady pace. Will they be able to add servers fast enough to keep up with demand? Perlman said that they can address this issue by //simply// adding new servers to their cluster, but I don't think it is so simple.

The technological implications of OnLive are of course very deep, and they may change how developers make their games. According to [[GameSpot|http://www.gamespot.com/features/6206623/index.html]], any PC game can run on the OnLive platform. Today's games are designed with clear computational constraints in mind, but OnLive promises an amount of computational power that not even the best studios can afford for their development machines. Moreover, PC games rarely take advantage of the latest and greatest technologies; they are generally designed to work well with hardware one or two generations behind, because very few gamers have the latest high-end systems. With OnLive, developers could actually make games on steroids that use the most sophisticated algorithms without compromises. That sounds wonderful in terms of creative possibilities, but again, if developers start abusing all this power, OnLive may have serious problems in the near future.

While Perlman claims that OnLive will be able to run any standard PC game, these technical challenges suggest that developers must make a proactive effort to support this platform at some point. Although OnLive may run existing PC games just fine, it seems that some studios will have to work closely with OnLive to keep the computational load from becoming unmanageable. In other words, making a game that runs well on a processor with four or eight cores is not the same thing as making a game that is supposed to run on a large-scale computing cloud.

Now, who is the real target audience of this service? Established hard-core gamers? Casual gamers? One of the themes of current-gen development is to target a larger audience than before, and possibly to recruit new core gamers. Some companies, such as Sony, bet on raw computational power and lost. In contrast, the success of the Wii and the DS showed that non-gamers as well as many gaming enthusiasts prefer originality. Will OnLive's promise of infinite computational power help them win the hearts of new gamers?

In my opinion there is also a problem with their business model. They assume that network bandwidth is cheaper than processing power. Is that true? While a decent number of people in the US have a broadband connection, very few users enjoy a connection that peaks at 10 Mbit/s, which is needed to play games at 1080p resolution with OnLive. The issue is that it is actually cheaper (in the long run) and simpler to buy a console or even a mid-range PC that can play most games at 720p or higher resolution. So, why bother? Moreover, many gamers actually enjoy being able to play games without an Internet connection. As proof, a few years ago there was a backlash among gamers when Valve hinted at the possibility that you must be connected in order to play Half-Life 2. Of course, OnLive offers a few cool features that are specific to their service, such as spectating other people's games, but I don't think that is enough to convince most people to get a subscription.

Did I say subscription? A few services like XBOX Live! did actually manage to get a good number of paying subscribers, but most people may be turned off by the idea of having to pay a monthly fee in addition to the price of the games they purchase"""--especially casual gamers and non-gamers."""

Obviously OnLive must overcome many challenges to succeed, and for now the response of most gamers is a mixed feeling of excitement and skepticism. It is therefore very important that their service works as well as promised when it comes out, to keep momentum and convince gamers to subscribe.

Despite all the potential problems, publishers are going to //love// OnLive!

In a sense, OnLive is a special form of digital delivery. I discussed before that there is a form of [[cultural resistance|Accepting Digital Delivery]] that is holding back this distribution model. OnLive, if it succeeds, is of course going to accelerate the push toward digital delivery. However, here the //cultural// problem is even deeper. With OnLive, game data is not even stored on your machine. What happens to all the games you purchased if they decide to pull the plug?

If OnLive really succeeds, it is also going to have a pleasant side effect """--it will revitalize PC gaming!"""

Who are the losers here? It is hard to tell, but I am sure that hardware manufacturers, such as Intel or Nvidia, are not going to be happy.

Despite all the potential problems, I truly hope that OnLive succeeds! OnLive can change the gaming landscape for good and turn the industry upside-down. In fact, the change is most likely going to favor everyone. However, I don't think that OnLive is going to disrupt the current business model completely. Many gamers will still want to play games in their living room without being connected to the Internet. PC gaming is going to get stronger. Some people simply don't have a fast enough connection to enjoy this service. And, what about those fun LAN parties with your friends? In my opinion, OnLive is going to emerge as a third player in this industry that will coexist next to consoles and PCs.
When you install the professional edition of Visual Studio, you always get the standard OpenGL headers and libraries as part of the C++ distribution. However, I noticed that the location of those files has changed over the years. Here I detail the install locations of the OpenGL files for the last few versions of Visual Studio.

''Visual Studio 2005''
The OpenGL files are part of the Platform SDK that ships with Visual Studio. If you are using the Express edition, you'll have to install the Platform SDK by hand.

//headers//
{{{C:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\Include\gl}}}

//static libraries//
{{{C:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\Lib}}}

//dynamic libraries//
The dynamic library {{{opengl32.dll}}} is not installed by Visual Studio; it ships with Windows itself in {{{C:\Windows\System32}}}.


''Visual Studio 2008''
The OpenGL files have been moved outside the actual Visual Studio folder in {{{Program Files}}}, which is very confusing.

//headers//
{{{C:\Program Files\Microsoft SDKs\Windows\v6.0A\Include\gl}}}

//static libraries//
{{{C:\Program Files\Microsoft SDKs\Windows\v6.0A\Lib}}}

//dynamic libraries//
The dynamic library {{{opengl32.dll}}} is not installed by Visual Studio; it ships with Windows itself in {{{C:\Windows\System32}}}.


''Visual Studio 2010''
The location of the OpenGL files is similar to the previous version, but this time you must use version 7 of the Windows SDK.

//headers//
{{{C:\Program Files\Microsoft SDKs\Windows\v7.0A\Include\gl}}}

//static libraries//
{{{C:\Program Files\Microsoft SDKs\Windows\v7.0A\Lib}}}

//dynamic libraries//
The dynamic library {{{opengl32.dll}}} is not installed by Visual Studio; it ships with Windows itself in {{{C:\Windows\System32}}}.
While it is an established convention in C++ to separate the declaration of a class from its definition into header and implementation files, this approach falls apart when implementing a templatized class. In fact, while using separate header and implementation files typically reduces the incidence of linker errors for standard code, it //creates// linker errors when used for C++ templates. The best approach for templatized classes is instead to inline the whole implementation in a single header file. It is also an established convention to name a single-header implementation of a class with the extension //hpp//.

This [[article|http://www.codeproject.com/KB/cpp/templatesourceorg.aspx]] explains clearly why this is the case.
Yesterday, when I tried to open Outlook 2007, I was welcomed by a very unwelcoming message. Outlook essentially failed to start, and the only lead to its failure was a rather generic message box that read: "Cannot start Microsoft Office Outlook. Cannot open the Outlook window". I did not find any error or warning related to Microsoft Office in Vista's event log, but I do remember seeing a brief error message raised by Outlook when I tried to shut down my machine the day before. It was a very frustrating problem indeed, and it took a lot of trial and error with Google's help to find a proper fix.


Here are a few things that I tried before figuring out what the problem was:
* I ran Start Menu/Programs/Microsoft Office/Microsoft Office Tools/Microsoft Office Diagnostics. This tool ran for about 15 minutes, fixed some errors, and completed successfully. Yet Outlook was still broken.
* I ran C:/Program Files/Microsoft Office/Office12/SCANPST.EXE on my pst file, located in the folder specified in Control Panel/Mail. The tool again found some errors in the file and performed a repair, but Outlook still failed to start.
* I tried to start Outlook in "safe" mode using the command line: "Outlook.exe /safe". Outlook again failed to start.
* I tried to run C:/Program Files/Microsoft Office/Office12/SCANOST.EXE following the guidelines on [[this|http://mark.santaniello.com/archives/417]] page, but the tool failed. I later discovered that this tool is used to repair the connection with an Exchange server, which I obviously don't have at home.
* I followed [[these|http://support.microsoft.com/kb/913843]] guidelines by adding a registry key. It turns out that this fix corrects a problem that occurs when Outlook tries to connect to an Exchange server, which again is not related to my problem.
* I tried to repair my account settings from Control Panel/Mail/E-mail Accounts. Still nothing.

''The Fix''
After more than an hour of annoyance, I finally found a fix. It turns out that the problem was caused by my active profile getting corrupted, and unfortunately it seems that there is no "elegant" solution. Here are the steps:

# In Control Panel/Mail/Show Profiles I added a new profile.
# I entered the settings of my mail server (by copying them from another machine).
# In the same dialog box I checked: "prompt for a file to be used".
# I ran Outlook, and it finally managed to start.
# I verified my mail settings.
# I imported my old pst file into the new profile.
# I downloaded all my emails again from the server (over 450 MB!). I don't think this step can be avoided.

Unfortunately, after Outlook downloaded my messages again, it created a lot of duplicates. Apparently //I should have imported my old pst file after downloading the messages//, so that I could instruct Outlook to ignore duplicates. In any case, this is what I did to remove the duplicates:
# Make sure to mark as read all the recent messages that are not duplicates! This is going to simplify the process substantially.
# Expand the inbox view, so that multiple columns become visible.
# Right click on the column labels at the top and choose //Field Chooser//.
# Select //All Messages// in the drop down menu and then click on the //modified// label.
# Now sort the messages by the modified field. All duplicate messages that were just downloaded should appear with today's date and be highlighted as unread.
# Select all the duplicate messages by the above criterion and delete them.

Now you can go back to Control Panel/Mail/Show Profiles and set Outlook to always use the new profile.

Phew! It is a real shame that we have to go through so much trouble to get Outlook 2007 to work again after a failure. And I must say that many of the steps I had to follow are really ugly and inelegant.
PIL is a great image manipulation library for Python, but its documentation is not necessarily that great. For instance, most PIL functions require a //mode// argument, but there is no mention of the available modes in the online documentation. However, you can figure out the supported modes yourself from one of the PIL headers, reproduced below:

static struct {
    const char* mode;
    const char* rawmode;
    int bits;
    ImagingShuffler unpack;
} unpackers[] = {

    /* bilevel */
    {"1",	"1",		1,	unpack1},
    {"1",	"1;I",		1,	unpack1I},
    {"1",	"1;R",		1,	unpack1R},
    {"1",	"1;IR",		1,	unpack1IR},

    /* greyscale */
    {"L",	"L;2",  	2,	unpackL2},
    {"L",	"L;4",  	4,	unpackL4},
    {"L",	"L",   		8,	copy1},
    {"L",	"L;I",   	8,	unpackLI},
    {"L",	"L;16",  	16,	unpackL16},
    {"L",	"L;16B",  	16,	unpackL16B},

    /* palette */
    {"P",	"P;1",   	1,	unpackP1},
    {"P",	"P;2",   	2,	unpackP2},
    {"P",	"P;2L",   	2,	unpackP2L},
    {"P",	"P;4",   	4,	unpackP4},
    {"P",	"P;4L",   	4,	unpackP4L},
    {"P",	"P",		8,	copy1},

    /* true colour */
    {"RGB",	"RGB",		24,	ImagingUnpackRGB},
    {"RGB",	"RGB;L",	24,	unpackRGBL},
    {"RGB",	"RGB;16B",	48,	unpackRGB16B},
    {"RGB",	"BGR",		24,	ImagingUnpackBGR},
    {"RGB",	"BGR;15",	16,	ImagingUnpackBGR15},
    {"RGB",	"BGR;16",	16,	ImagingUnpackBGR16},
    {"RGB",	"BGR;5",	16,	ImagingUnpackBGR15}, /* compat */
    {"RGB",	"RGBX",		32,	copy4},
    {"RGB",	"RGBX;L",	32,	unpackRGBAL},
    {"RGB",	"BGRX",		32,	ImagingUnpackBGRX},
    {"RGB",	"XRGB",		24,	ImagingUnpackXRGB},
    {"RGB",	"XBGR",		32,	ImagingUnpackXBGR},
    {"RGB",	"YCC;P",	24,	ImagingUnpackYCC},
    {"RGB",	"R",   		8,	band0},
    {"RGB",	"G",   		8,	band1},
    {"RGB",	"B",   		8,	band2},

    /* true colour w. transparency */
    {"RGBA",	"LA",		16,	unpackLA},
    {"RGBA",	"LA;16B",	32,	unpackLA16B},
    {"RGBA",	"RGBA",		32,	copy4},
    {"RGBA",	"RGBA;I",	32,	unpackRGBAI},
    {"RGBA",	"RGBA;L",	32,	unpackRGBAL},
    {"RGBA",	"RGBA;16B",	64,	unpackRGBA16B},
    {"RGBA",	"BGRA",		32,	unpackBGRA},
    {"RGBA",	"ARGB",		32,	unpackARGB},
    {"RGBA",	"ABGR",		32,	unpackABGR},
    {"RGBA",	"YCCA;P",	32,	ImagingUnpackYCCA},
    {"RGBA",	"R",   		8,	band0},
    {"RGBA",	"G",   		8,	band1},
    {"RGBA",	"B",   		8,	band2},
    {"RGBA",	"A",   		8,	band3},

    /* true colour w. padding */
    {"RGBX",	"RGB",		24,	ImagingUnpackRGB},
    {"RGBX",	"RGB;L",	24,	unpackRGBL},
    {"RGBX",	"RGB;16B",	48,	unpackRGB16B},
    {"RGBX",	"BGR",		24,	ImagingUnpackBGR},
    {"RGBX",	"BGR;15",	16,	ImagingUnpackBGR15},
    {"RGB",	"BGR;16",	16,	ImagingUnpackBGR16},
    {"RGBX",	"BGR;5",	16,	ImagingUnpackBGR15}, /* compat */
    {"RGBX",	"RGBX",		32,	copy4},
    {"RGBX",	"RGBX;L",	32,	unpackRGBAL},
    {"RGBX",	"BGRX",		32,	ImagingUnpackBGRX},
    {"RGBX",	"XRGB",		24,	ImagingUnpackXRGB},
    {"RGBX",	"XBGR",		32,	ImagingUnpackXBGR},
    {"RGBX",	"YCC;P",	24,	ImagingUnpackYCC},
    {"RGBX",	"R",   		8,	band0},
    {"RGBX",	"G",   		8,	band1},
    {"RGBX",	"B",   		8,	band2},
    {"RGBX",	"X",   		8,	band3},

    /* colour separation */
    {"CMYK",	"CMYK",		32,	copy4},
    {"CMYK",	"CMYK;I",	32,	unpackCMYKI},
    {"CMYK",	"CMYK;L",	32,	unpackRGBAL},
    {"CMYK",	"C",   		8,	band0},
    {"CMYK",	"M",   		8,	band1},
    {"CMYK",	"Y",   		8,	band2},
    {"CMYK",	"K",   		8,	band3},
    {"CMYK",	"C;I",   	8,	band0I},
    {"CMYK",	"M;I",   	8,	band1I},
    {"CMYK",	"Y;I",   	8,	band2I},
    {"CMYK",	"K;I",   	8,	band3I},

    /* video (YCbCr) */
    {"YCbCr",	"YCbCr",	24,	ImagingUnpackRGB},
    {"YCbCr",	"YCbCr;L",	24,	unpackRGBL},
    {"YCbCr",	"YCbCrX",	32,	copy4},
    {"YCbCr",	"YCbCrK",	32,	copy4},

    /* integer variations */
    {"I",	"I",		32,	copy4},
    {"I",	"I;8",		8,	unpackI8},
    {"I",	"I;8S",		8,	unpackI8S},
    {"I",	"I;16",		16,	unpackI16},
    {"I",	"I;16S",	16,	unpackI16S},
    {"I",	"I;16B",	16,	unpackI16B},
    {"I",	"I;16BS",	16,	unpackI16BS},
    {"I",	"I;16N",	16,	unpackI16N},
    {"I",	"I;16NS",	16,	unpackI16NS},
    {"I",	"I;32",		32,	unpackI32},
    {"I",	"I;32S",	32,	unpackI32S},
    {"I",	"I;32B",	32,	unpackI32B},
    {"I",	"I;32BS",	32,	unpackI32BS},
    {"I",	"I;32N",	32,	unpackI32N},
    {"I",	"I;32NS",	32,	unpackI32NS},

    /* floating point variations */
    {"F",	"F",		32,	copy4},
    {"F",	"F;8",		8,	unpackF8},
    {"F",	"F;8S",		8,	unpackF8S},
    {"F",	"F;16",		16,	unpackF16},
    {"F",	"F;16S",	16,	unpackF16S},
    {"F",	"F;16B",	16,	unpackF16B},
    {"F",	"F;16BS",	16,	unpackF16BS},
    {"F",	"F;16N",	16,	unpackF16N},
    {"F",	"F;16NS",	16,	unpackF16NS},
    {"F",	"F;32",		32,	unpackF32},
    {"F",	"F;32S",	32,	unpackF32S},
    {"F",	"F;32B",	32,	unpackF32B},
    {"F",	"F;32BS",	32,	unpackF32BS},
    {"F",	"F;32N",	32,	unpackF32N},
    {"F",	"F;32NS",	32,	unpackF32NS},
    {"F",	"F;32F",	32,	unpackF32F},
    {"F",	"F;32BF",	32,	unpackF32BF},
    {"F",	"F;32NF",	32,	unpackF32NF},
#ifdef FLOAT64
    {"F",	"F;64F",	64,	unpackF64F},
    {"F",	"F;64BF",	64,	unpackF64BF},
    {"F",	"F;64NF",	64,	unpackF64NF},
#endif

    /* storage modes */
    {"I;16",	"I;16",		16,	copy2},
    {"I;16B",	"I;16B",	16,	copy2},

    {NULL} /* sentinel */
};
The environment variable PYTHONPATH defines additional search paths for Python's {{{import}}} machinery. You use this environment variable if you placed some of your modules in a non-standard directory and you want to be able to import them globally from the Python interpreter. Another very useful application is when you want to run a Python script that needs to find a module somewhere other than the current directory.

There are, however, a few quirks if you want to do this in Windows:
* even if your path contains spaces, you must not use double quotes
* your path delimiters must be forward slashes
* if you have multiple paths, you must separate them with a semicolon


Consider the following Windows //batch file//.


{{{
SET PYTHONPATH=C:/Documents and Settings/IronMan/MKI;C:/Documents and Settings/IronMan/MKV
python initiateSuit.py
}}}
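A quick way to verify that the variable took effect is to ask a fresh interpreter for its {{{sys.path}}}. The sketch below (plain Python rather than a batch file; the IronMan directories are the made-up ones from the example above) sets the variable and spawns a child interpreter, since PYTHONPATH is only read at startup:

```python
# PYTHONPATH is only read when the interpreter starts, so to see its
# effect we spawn a child interpreter and inspect its sys.path.
import os
import subprocess
import sys

# The two hypothetical directories from the batch file above.
paths = [
    "C:/Documents and Settings/IronMan/MKI",
    "C:/Documents and Settings/IronMan/MKV",
]

env = os.environ.copy()
# os.pathsep is ';' on Windows, matching the semicolon rule above.
env["PYTHONPATH"] = os.pathsep.join(paths)

out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.path)"], env=env
)
print(out.decode())  # the extra directories now appear in the search path
```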

[[PaperVision3D|http://code.google.com/p/papervision3d/]] is an open source real-time 3D engine for Flash. It actually looks pretty good.
Nintendo is about to release a very interesting downloadable game for the DSi called Looksley's Line Up. It uses face tracking to estimate the motion of the console with respect to the viewer and simulate parallax in game (fake 3D). Here is a video of this technology in action: [[Looksley's Line Up|http://www.youtube.com/watch?v=lEMkgVnzvdE&feature=player_embedded]]. The trick is similar to the one introduced to the masses by Johnny Chung Lee several years ago, using Wiimotes for head tracking: [[Head Tracking|http://www.youtube.com/watch?v=Jd3-eiid-Uw]]. Frankly, the game seems rather boring and gimmicky, and the technology is probably not going to work great due to poor face tracking, but I think it is an awesome idea.

Here is an attempt to create a similar effect in Flash by streaming from a webcam: [[Gerbster|http://www.gerbster.nl/2009/12/face-tracking-parallax-in-flash/]]. This experiment does not work, but it is still very remarkable that some developers actually tried to port some OpenCV functionality to Flash.
In Windows, as well as in any other operating system, an executable binary file can embed resources directly in the file itself. The most common types of resources are icons, forms, and string tables. There are a couple of really nifty applications for Windows that allow you to peek at and ''modify'' all the resources embedded in a binary file. They are:
* [[Resource Hacker|http://download.cnet.com/Resource-Hacker/3000-2352_4-10178587.html]]
* [[Resource Editor|http://www.wilsonc.demon.co.uk/d10resourceeditor.htm]]
Apple Keynote, unlike PowerPoint, always embeds all the resources (images, videos, etc.) used in a presentation in the file you save. This makes Keynote files typically much larger than PowerPoint ones, but it has the considerable advantage that you never end up with a misplaced asset when you present your slides to a big audience. All PowerPoint users know how painful this can be with video files.

Now, one useful question is how to access the resources embedded in a Keynote file.   

''iWork '08''
In iWork '08 it is actually pretty easy. A presentation saved in Keynote '08 is actually a bundle, that is, a folder on the file system, and you can access its contents either from the terminal or by choosing {{{show package contents}}} in the context menu of the Finder.

''iWork '09''
Files saved in Keynote '09 are actually NOT bundles, so the trick above does not work. You can still peek into them as follows:
# If you actually have Keynote '09, you can open the presentation in the application, save it as an iWork '08 file, and use the trick above.
# Interestingly, Keynote '09 files are simply zipped folders, so you can look into them by treating them as zip files:
## rename your file with the {{{zip}}} extension
## double click on the renamed file in the Finder to extract its contents into a folder, which will contain all the resources used by the Keynote presentation

The iWork '09 trick was suggested by [[Jonathan Chambers|http://teachers.saschina.org/jchambers/2010/06/01/how-do-i-extract-movie-files-from-a-keynote/]].
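Since a Keynote '09 file is just a zip archive, you can also skip the renaming step entirely and read it with Python's standard {{{zipfile}}} module. This is only a sketch; {{{extract_keynote_media}}} and {{{talk.key}}} are hypothetical names:

```python
# A Keynote '09 document is an ordinary zip archive, so the standard
# zipfile module can list and extract its contents without renaming
# the file first.
import zipfile

def extract_keynote_media(path, dest="keynote_media"):
    """List and extract everything bundled in a Keynote '09 file."""
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            print(name)          # the bundled assets, e.g. media files
        archive.extractall(dest)

# extract_keynote_media("talk.key")
```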
The idea of making a game based on mind-bending realities like the ones envisioned by Escher is very intriguing. However, it is also a major challenge for game designers. Perspective-defying puzzles, gravity-shifting tricks, and a reckless disregard for the laws of physics, while interesting, are the perfect recipe for a messy and frustrating game unless they are used with judgment and restraint. Yet, when placed in the clever hands of an expert designer, these ingredients can be compelling building blocks for a great and original game. There have been several games in the past that based their core mechanics on perspective-defying transformations, but a recent game presented at the Independent Games Festival called [[Fez|http://polytroncorporation.com/?page_id=61]] implements these ideas in a very clean and effective manner.
Among the many great things about the Python language is the {{{pickle}}} module, which provides built-in support for object serialization. However, if you want to load a //pickled// object on multiple platforms, you may run into problems. The most common way to serialize an object out to a file is as follows:

{{{
import pickle
myData = { 'a':1, 'b':2, 'c':3 }
outFile = open( 'pickleTest.data', 'w' )
pickle.dump( myData, outFile )
}}}

Loading is also very easy:

{{{
import pickle
inFile = open( 'pickleTest.data', 'r' )
myData = pickle.load( inFile )
print myData
}}}
Now, if you were to write the file on Windows and open it on a Unix platform, such as Linux, you would get the unpleasant and cryptic error {{{ImportError: No module named copy_reg pickle}}}. Why is that? It is the usual problem with line endings between Windows and other platforms. In fact, the code above instructs pickle to serialize your object in ASCII format, which spells trouble. There are two solutions to this problem.

''Write the file in binary''
You can simply avoid this problem by opening the file in binary mode:

{{{
import pickle
myData = { 'a':1, 'b':2, 'c':3 }
outFile = open( 'pickleTest.data', 'wb' )
pickle.dump( myData, outFile )
}}}
''Convert the line endings of the pickled file to Unix format''
Simply convert the line endings of the file:
# Use {{{dos2unix}}} on Linux and other Unix environments
# Use TextWrangler on Mac OS X and save the file with Unix encoding
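For completeness, here is a sketch of the whole round trip with binary files; note that the reading side should open the file with {{{rb}}} as well:

```python
# Writing AND reading in binary mode sidesteps the line-ending problem
# entirely, so the same pickle file loads on Windows, Linux, and Mac.
import os
import pickle
import tempfile

my_data = {'a': 1, 'b': 2, 'c': 3}
path = os.path.join(tempfile.gettempdir(), 'pickleTest.data')

with open(path, 'wb') as out_file:   # 'b' is the important part
    pickle.dump(my_data, out_file)

with open(path, 'rb') as in_file:    # read in binary mode too
    loaded = pickle.load(in_file)

assert loaded == my_data
```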

* http://stackoverflow.com/questions/556269/importerror-no-module-named-copy-reg-pickle
When writing a fragment shader in GLSL, you typically operate directly on the interpolated fragment without worrying much about its actual position in screen space. There are, however, some instances in which you would like to draw pixels at specific positions and thus need to query the exact coordinates of the fragment in screen space. This information is given to you by the built-in {{{vec4 gl_FragCoord}}}, which provides the //x,y,z,1/w// homogeneous coordinates of the fragment in screen space. There is a catch though! The exact location of pixel P="""{"""{{{pixelX, pixelY}}}"""}""" is actually """{"""{{{pixelX + 0.5, pixelY + 0.5}}}"""}""" in fragment coordinates. In other words, if you think of your screen as a grid of pixels, each pixel sits at the very center of its grid cell, where the bottom left coordinates of pixel P are given by """{"""{{{pixelX, pixelY}}}"""}""" and the top right coordinates are given by """{"""{{{pixelX + 1.0, pixelY + 1.0}}}"""}""". Therefore, in this coordinate system the bottom left corner of your screen is given by fragment coordinates """{"""{{{0.5, 0.5}}}"""}""" and the top right corner is given by """{"""{{{screenWidth - 0.5, screenHeight - 0.5}}}"""}""", where {{{screenWidth}}} and {{{screenHeight}}} are the width and height of the screen. Why the minus signs? Remember that the indexing of the screen pixels goes from """{"""{{{0,0}}}"""}""" to """{"""{{{screenWidth - 1, screenHeight - 1}}}"""}""".

* The coordinates reported by {{{gl_FragCoord}}} are window-relative: they are produced after the viewport transform and are not directly affected by the ModelView matrix.
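The half-pixel offset is easy to get wrong, so here is a tiny sanity check of the mapping (plain Python rather than GLSL; {{{pixel_index}}} is just an illustrative helper that recovers integer pixel indices from fragment coordinates):

```python
import math

def pixel_index(frag_x, frag_y):
    # gl_FragCoord samples sit at pixel centers (the +0.5 offsets), so
    # flooring recovers the integer pixel indices.
    return (math.floor(frag_x), math.floor(frag_y))

# The bottom left fragment of the screen maps to pixel (0, 0)...
assert pixel_index(0.5, 0.5) == (0, 0)
# ...and the top right fragment of a 1280x720 screen to (1279, 719).
assert pixel_index(1280 - 0.5, 720 - 0.5) == (1279, 719)
```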
The semantics of pointer constness in C++ is more subtle than it seems, and it is often used incorrectly. There are three ways to apply constness to a pointer:
* {{{Object const* p}}} (equivalently {{{const Object* p}}}) means that {{{p}}} points to a constant {{{Object}}}: you can change the pointer, but not the object. In most cases this is what you want!
* {{{Object* const p}}} means that {{{p}}} is a const pointer to a mutable {{{Object}}}: you can't change the pointer {{{p}}}, but you can change the object. This is rarely useful.
* {{{Object const* const p}}} means that {{{p}}} is a constant pointer to a constant {{{Object}}}, so you can't change anything.

* http://www.parashift.com/c++-faq-lite/const-correctness.html#faq-18.5
In Windows Vista and Windows 7, drag and drop does not seem to work for some programs. The culprit is User Account Control (UAC), one of the controversial and annoying security features introduced with the Vista kernel. With UAC enabled, applications that are running at different privilege levels are not allowed to communicate with each other. Therefore, if you need to run an application with administrative privileges, this application will not accept any drops from applications that run without administrative privileges. This seems like a reasonable security precaution, if it weren't for all the problems that it causes.

Here's the catch. One common application that ''never'' runs with administrative privileges is Windows Explorer, which is also the most likely source of items for drag and drop. You may even try to explicitly run Windows Explorer as an administrator, but you would be fooled into thinking that Windows complied. Windows Explorer will never run with administrative privileges from a regular user account! End of story. Actually, the special administrator account does allow this, but you may not want to bother with it.

So, how can you work around this problem and enable drag and drop? You either find a way to run the application that should accept drops at a lower privilege level, or you [[disable UAC entirely|http://www.petri.co.il/disable_uac_in_windows_vista.htm]].

* http://foxsys.blogspot.com/2009/08/windows-7-user-account-control-uac-drag.html
You might be wondering what those often sizable {{{pdb}}} files that Visual Studio generates along with your executables are. They are called //Program Database Files// and they contain a lot of useful information for debugging. But you knew that already! PDB files serve two distinct purposes:
# They are used by the compiler to add debug information to your binaries incrementally. 
# They can be used to debug crash dumps of a build.

''Incremental Compilation with Debug Information''
A PDB file is generated (or updated) for every build of your code and it stores a database of the debug information that is already present in the current intermediate //object// files, and thus in the current binary. With the program database, the compiler can keep track of debug information and only add what is needed in an incremental build.

''Deferred Debugging''
This is really cool! You know when a program crashes in Windows and you get that annoying message prompting you to send a crash report to Microsoft? Well, if you have the PDB file of the build that produced the crash dump, you can get very detailed information about the state of your program when the crash occurred.

''What's Inside''

A PDB file contains the following information for native code (unmanaged C++):
* Public, private, and static function addresses
* Global variable names and addresses
* Parameter and local variable names and offsets where to find them on the stack
* Type data consisting of class, structure, and data definitions
* Frame Pointer Omission (FPO) data, which is the key to native stack walking on x86
* Source file names and their lines
That's a lot of detailed information about your source, so you definitely don't want to give it away to just anyone!

The file format of PDB files is a closely guarded secret. In fact, if you knew exactly how to interpret PDB files, you could use them to carefully reverse engineer a binary without the source code. However, Microsoft gives developers some tools and an API to interact with PDB files and carefully debug applications.

* [http://stackoverflow.com/questions/1449060/what-is-the-usage-of-pdbs-program-debug-database]
* [http://www.wintellect.com/CS/blogs/jrobbins/archive/2009/05/11/pdb-files-what-every-developer-must-know.aspx]
* [http://www.wintellect.com/CS/blogs/jrobbins/archive/2009/08/22/how-many-secrets-do-net-pdb-files-really-contain.aspx]
* [http://www.wintellect.com/CS/blogs/jrobbins/archive/2009/05/26/visual-studio-remote-debugging-and-pdb-files.aspx]
* [http://www.wintellect.com/CS/blogs/jrobbins/archive/2009/05/29/keeping-specific-pdb-files-from-loading-in-the-debugger.aspx]
* [http://www.wintellect.com/CS/blogs/jrobbins/archive/2009/06/19/do-pdb-files-affect-performance.aspx]
* [http://www.wintellect.com/CS/blogs/jrobbins/archive/2009/08/31/correctly-creating-native-c-release-build-pdbs.aspx]
* [http://msdn.microsoft.com/en-us/magazine/cc301459.aspx]
* [http://msdn.microsoft.com/en-us/library/yd4f8bd1.aspx]
Python has become my favorite language for now. The latest and greatest version of Python is Python 3, but I admit that I am still a little scared to switch to this new release. Python 3 is indeed a major departure from Python 2.x and it is not backward compatible. I have such a large number of sophisticated Python applications that I am afraid it will take a huge amount of work to update all my old code to the new release.

So I thought I would install Python 2.6, which is a "bridge" version designed to help developers transition more smoothly to Python 3.0. Well, I must say that Python 2.6 caused me all kinds of trouble and I find myself often going back to run things in Python 2.5! Here are a few things that break:
* There is no binary version of NumPy for Python 2.6. Luckily, I was able to make it work by building it manually after installing Visual Studio 2008 Professional.
* py2exe simply doesn't work with Python 2.6 and complains about not finding //msvcp90.dll//. I Googled this issue extensively and it seems that the only solution for now is to revert to Python 2.5.
* PyScripter, which is my favorite IDE for Python under Windows, does not work with Python 2.6. //Update:// Luckily the new version works fine.
I had to go through a few woes to get some Python packages to install properly on Leopard. The problem originated from a conflict between two different installations of Python:
* The version of Python that comes bundled with Leopard stores its packages in:
* The bundled version, however, does not work very well with many external packages, such as //wxpython//
* When you install the version of Python from [[www.python.org]], the default path for packages becomes:

Now, when you install new packages downloaded from the internet, they may end up in either of these paths depending on how smart the installer is. As a result, you may think that your package installed correctly, yet it may not be visible from your Python interpreter.
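Before hunting for package paths, it can help to confirm which of the two installations is actually running. A minimal sketch that asks the interpreter itself:

```python
import sys

# Print which python binary is active and where its installation lives.
print(sys.executable)  # path of the running interpreter
print(sys.prefix)      # root of the active installation
```

If {{{sys.executable}}} points at the Apple-bundled interpreter while your installer targeted the python.org one (or vice versa), that mismatch explains the "missing" package.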

There are essentially two ways to solve this problem:

//First Solution//
* Find the path that the active Python interpreter uses to locate its packages. The following Python script will do the job:

{{{
import sys

for pathItem in sys.path:
    if pathItem.find('site-packages') >= 0:
        print(pathItem)
}}}

* Create a text file, call it //<name of package>.pth//, and add the following line:
{{{
import site; site.addsitedir('/path/to/folder/containing/the/package/')
}}}

//Second Solution//
* Simply copy the package folder to the active site-packages folder (you can find it using the method in the first solution).
Setting up a Qt project in Visual Studio by hand can be extremely difficult. Luckily, //qmake// can save you much trouble by generating Visual Studio projects automatically from the command line:


{{{
qmake -t vcapp nameOfProject.pro
}}}


Or, if you want to create a library:


{{{
qmake -t vclib nameOfProject.pro
}}}


Alternatively, you can create a Qt project directly from Visual Studio's IDE with the Qt Add-in.

What makes Qt projects particularly tricky is that Qt uses a [[meta-object design|http://qt.nokia.com/doc/4.0/templates.html]] that requires classes that use Qt functionality to be preprocessed by specific command line tools before they are fed into the actual build process. If you were to set up a project manually, you would also have to set up your source files to interact with Qt's tools appropriately, which is not easy.

On the other hand, in order to have //qmake// generate a Visual Studio project, you must first create a Qt project file with extension //.pro//. Unfortunately, qmake does not understand all the properties of a Visual Studio project, and you often have to manually tweak project settings in the IDE before your project can compile correctly.
I rambled before about [[OnLive|OnLive]], an interesting yet daring experiment in on-demand game streaming. This time, I would like to write down some thoughts about another very interesting experiment """-- Quake Live""" by id. Quake Live brings forth a new business model where you play a full game in the browser for free and the revenue stream is provided by in-game ads. And here we are not talking about your run-of-the-mill browser game in Flash! Rather, Quake Live is a special version of the classic Quake 3 Arena running entirely in your browser. To play it, you simply sign up on the Quake Live website, download a small plug-in, and in a matter of minutes you can start fragging other players.

Although [[in-game advertising|http://en.wikipedia.org/wiki/In-game_advertising]] is not entirely new and there have been several related experiments in the past, I believe that Quake Live is peculiar in many ways. Since the game runs through the browser, and you are of course always connected when you play, the game developer has tighter oversight over the players' game experience. This is important, because advertisers want good user metrics to decide whether or not their ads are effective. On the other hand, the game is free, there is no ridiculous DRM encumbering the gamer, and, being a browser app, users may be more willing to accept the ads.

Only MMOs offer a similar opportunity for developers, but this model is generally not possible for other genres. So it is interesting to see an experiment that may open up the possibility of free, ad-supported games for something like a first person shooter.

A few years ago, Ubisoft made [[another experiment|http://www.trustedreviews.com/gaming/news/2007/09/03/Ubisoft-Releases-Free-Ad-supported-Games/p1]] in which it gave away for free an ad-supported version of some of the high-caliber games in its catalog. Although a free game with ads is better than a full-priced game with ads like [[Battlefield 2142|http://www.gameinformer.com/News/Story/200610/N06.1018.1149.03262.htm]], that experiment was not entirely successful. Here are a few reasons:

* Ads are intrusive in these games.
* The ads are not consistent with the game world.
* Those games are huge to download.
* Advertisers don't get good feedback on their ads.
* These games still need a relatively powerful machine to run on.

One aspect of Quake Live that brings it a step closer to the model devised by OnLive is that you can play the game virtually anywhere. Yet, in contrast to OnLive, the end-user does not need a monster broadband connection to play, and at the same time the game provider does not need a specialized infrastructure that is expensive to maintain and may not scale well. The business model of Quake Live also has some of the advantages of OnLive and other technologies for digital delivery. Specifically, the developer can deliver the game directly to players, and there are no intermediaries that siphon off the games' revenues. However, it is not clear whether a free game with ads is more profitable than a low-production-cost game sold for a fee. In this regard, I should point out that Quake Live is a spin-off of an existing game delivered with new technology, so production costs are expected to be fairly low.

What is special about Quake Live, and makes it an interesting case for this distribution model, is that it is the kind of game in which ads are actually not that annoying for gamers. In fact, the original Quake 3 Arena is very similar to an online cyber-sport, with a strong competitive component, and hence watching ads on the walls of the arenas actually makes it feel more like a //real// sport than a mere multiplayer game.
I found a very interesting [[link|http://www.its.caltech.edu/~sjordan/zoo.html]] that lists all known quantum algorithms. The list is compiled by Stephen Jordan, a postdoc at Caltech.
There are a number of events that may put the Finder in a sort of inconsistent state, making it act strangely. In most cases, the problem is caused by:
* broken Finder preferences
* broken file permissions

''To reset the Finder preferences''
* delete or rename the file {{{~/Library/Preferences/com.apple.finder.plist}}}
* delete the file {{{~/.DS_Store}}} (using the terminal, since this file is hidden)
* delete the file {{{~/Desktop/.DS_Store}}} (using the terminal, since this file is hidden)
* log out
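The rename step above can also be scripted. This is a small sketch of mine, not an Apple tool; it assumes the per-user Finder preferences location, and renaming (rather than deleting) lets you restore the file if resetting doesn't help:

```python
import os
import shutil

def backup_pref(plist_path):
    """Rename a preference file to .bak so the app regenerates it on next login."""
    if os.path.exists(plist_path):
        backup = plist_path + ".bak"
        shutil.move(plist_path, backup)
        return backup
    return None

# The per-user Finder preferences path assumed by the steps above:
finder_plist = os.path.expanduser("~/Library/Preferences/com.apple.finder.plist")
```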

''To repair file permissions''
* open the //Disk Utility// application
* select the primary disk in the left panel
* choose //repair permissions//
In principle, Mac OS X should be able to find any Windows machine on your local area network automatically and add it to the list of shared resources in the Finder. Unfortunately, more often than not, this feature does not work correctly. This is mostly a Windows problem, due to the way the computer name is advertised by the OS. However, there is a very simple and quick way to resolve this issue if you know the IP address of the machine you want to connect to. Once you know the IP of the target machine, you proceed as follows:
* In the Finder menu, select Go->Connect to Server... (COMMAND + K)
* Type {{{smb://}}} followed by the IP address of the target machine in the dialog that shows up
* Enter a user name and password for the machine if required

The IP of the remote computer now gets added to the list of  shared resources in the Finder and you can browse its contents as usual.

Here we used the Samba service on Mac OS X, which is a tool that allows Unix-like systems to interface with machines on a Windows network.
One of the major updates in Visual Studio 2010 over earlier versions is a completely overhauled Intellisense engine. In the past, Intellisense used its own custom database engine and format to collect, query, and store code intelligence information for your code. The biggest drawback of that approach was that the custom database engine could not deliver the speed and reliability of a general SQL engine, and, importantly, the generated database would get corrupted pretty easily. Instead, VS2010 relies on a standard SQL database for code intelligence, making Intellisense faster, more reliable, and more thorough. Still, VS2010 does not use a full-fledged version of SQL Server (which also ships with the IDE), but a more restricted version of the engine called SQL Server Compact 3.5 [1]. As a result, if you ever decide to uninstall this program from your system, Visual Studio's Intellisense won't work anymore.

# http://www.microsoft.com/downloads/en/details.aspx?FamilyID=dc614aee-7e1c-4881-9c32-3a6ce53384d9&displaylang=en 
File [[locks in Mac OS X|File Locking in Mac OS X]] can often cause trouble and frustration. Sometimes it feels like Mac OS suddenly locks files and folders for no apparent reason. One recurring problem is when a folder under Subversion gets locked. When this happens, almost all operations with svn lead to the error:
{{{svn: Can't move '.svn/tmp/entries' to '.svn/entries': Operation not permitted}}}

As suggested [[here|http://blogs.noname-ev.de/commandline-tools/archives/33-svn-Cant-move-.svntmpentries-to-.svnentries-Operation-not-permitted.html]] the solution is as usual to recursively unlock the working directory under Subversion with 


{{{
chflags -R nouchg *
}}}


However, it may not be obvious that the problem was in fact being caused by file locking in Mac OS X!

Typically, when this problem occurs, you also have to clean up Subversion's own locks using {{{svn cleanup}}}.
My friend again introduced me to a very cool topic about holograms. This time it is a technique for creating holograms ''by hand''. Unbelievable! The technique was introduced by William Beaty and is called [[scratch holography|http://www.eskimo.com/~billb/amateur/holo1.html]].
Train of Thought is my personal cyberspace to ramble about science, research, technology, and other stuff that tickles the mind.

Here are a few other interesting facts about Train of Thought:

* //Train of thought// is an English expression that refers to the succession of ideas that persistently swarm our minds.
* Apparently the term [[train of thought|http://en.wikipedia.org/wiki/Train_of_thought]] was introduced long ago by Thomas Hobbes.
* Train of thought is a synonym for //stream of consciousness//.
* Train of Thought is also the title of the seventh studio album by [[Dream Theater|http://www.dreamtheater.net/]].

[[Copyright Notice]]
There are several ways to set application defaults in Mac OS X and each one may produce a slightly different behavior.

''Setting Application Default for a Single File''

//Standard Way//
Right click on a file, select {{{Open With->Other}}}, make sure the option {{{Always Open With}}} is checked, and pick the application you would like to use to open the file. In the future this particular file will always open with the specified application.

//Quick Way//
Right click a file with the OPTION key pressed, select {{{Always Open With}}}, and proceed by picking the application.

''Comprehensive Solution''
The previous solution will affect a single file only, which might be confusing since other operating systems behave differently. Instead, if you want to affect all files of a particular type you have to proceed as follows:

# Select a file with the desired extension.
# Open the inspector for that file (select {{{Get Info}}} in the right click menu or simply press COMMAND + I)
# In the inspector window, expand the {{{Open With}}} section.
# Pick the application you want from the list, or select it from the file system using the {{{Other}}} option.
# Select {{{Change All}}} to apply the selection to all files of the same type.

* If you have assigned a specific application to an individual file using the first method, this choice will override the //comprehensive selection//. 
When typesetting a paper in LaTeX you may need to prepare your document in either Letter or A4 format, depending on the venue you are submitting to. These are the commands to set the page format for the IEEE style:

{{{
% use this line for letter sized paper
\documentclass[letterpaper, 10pt, conference]{ieeeconf}

% use this line for A4 paper
\documentclass[a4paper, 10pt, conference]{ieeeconf}
}}}
The option of exposing hidden files in the Finder is not available in Finder's preferences, so don't look there. The only way to do it is by typing the following in the Terminal:

{{{defaults write com.apple.finder AppleShowAllFiles TRUE}}}

and then

{{{killall Finder}}} 
Ramblings about technology, research, and other stuff that tickles the mind
Train of Thought
When Windows begins the shutdown procedure, it sends a //kill// signal to all running processes. A process that receives the //kill// signal is required to stop all pending operations and exit as soon as possible. This mechanism is designed to give all open applications and services a chance to close cleanly. However, if an application or service takes too long to exit, or fails to do so completely, Windows will take a very long time to shut down. Typically, Windows will wait a couple of minutes, after which it will terminate all processes that failed to close, but the wait may be too long for most people's patience. Unfortunately, troubleshooting and isolating applications that fail to close and slow down the shutdown procedure is not always easy.

Windows Vista introduced a very neat feature that reports all applications and services that take too long to exit during shutdown. You can see this log by going to:

''Control Panel->Performance Information and Tools->Advanced Tools->Programs are causing Windows to shut down slowly. View Details''

This brings up a window that lists all the processes that have slowed down your shutdown procedure in the past. This is very useful information for fixing your shutdown problems.
A lot of game developers complain that programming for the PlayStation has always been very tedious and unrewarding. The SDK is not very polished and the PlayStation's hardware on all models is very tricky. Of course, programming for the PlayStation 3 is no exception. As if that were not enough, the Cell processor that lies at the heart of the PlayStation 3 has a very unusual architecture, and many companies resist adopting it on their workstations because it is too difficult to program for. You would think that Sony should feel sorry about this shortcoming of their platform. But no: in a recent [[interview|http://kotaku.com/5135863/kaz-hirai-feels-faint-needs-to-lie-down]], Kaz Hirai touted this as a feature. He claims that the hardware will last longer on the market because it takes more time for developers to figure it out. Is he drunk? Maybe the fact that Sony is dead last in the console wars is a clue that this strategy is just ridiculous. As John Carmack pointed out in an older [[interview|http://www.gamespy.com/articles/641/641662p1.html]], Sony might have thought that being the leader in the console space during the previous generation would give them the prerogative to force developers to do as they please. However, lackluster sales of their platform and games should have proven that this is not a good strategy. Interestingly, most multi-platform games like GTA IV look worse on the PS3 than on the Xbox 360, despite the fact that the PS3 has better theoretical peak performance. In my opinion, this is a combination of bad engineering, bad marketing, and bad PR. Today we learned how not to do business.
I found an interesting [[article|http://www.informit.com/authors/bio.aspx?a=ff6d423f-42eb-4cb8-b20c-b0d0665de181]] that discusses some subtle differences between static and dynamic memory allocation and introduces a design pattern that addresses some of the performance problems caused by dynamic allocation. The article applies well to software targeted at mobile platforms. It is by Bruce Powell Douglass, an experienced engineer in real-time systems.
As with any external library, the Boost libraries that require linking can be bound to a program statically or dynamically. In Visual Studio, the Boost libraries are linked statically to executables by default. The benefit of this choice is that the binaries of your programs do not depend explicitly on any Boost DLL, which is good since Windows does not have a good way to resolve DLLs, unlike Mac OS X or Linux. One thing to keep in mind, though, is that the Boost libraries built for static linking have the prefix {{{lib}}} on Windows.

If your project is a DLL library, then you must link against the DLL version of Boost, and on Windows this means that you must have compiled the version of Boost without the {{{lib}}} prefix. Note that, despite the fact that the extension is always {{{.lib}}} for the DLL builds, these are just import libraries that act as proxies to the actual DLLs that must be loaded at run-time.

If you wish, you can build a regular executable against the DLL version of Boost by defining the preprocessor macro {{{BOOST_ALL_DYN_LINK}}}. Note that if you want to build against the DLL version of a Boost library, you must also link against the DLL version of the run-time libraries.

All this logic is resolved at compile time by the {{{auto_link.hpp}}} header in Boost:
{{{
#if (defined(_DLL) || defined(_RTLDLL)) && defined(BOOST_DYN_LINK)
// dynamic runtime and dynamic Boost: link against the dll import libraries
#elif defined(BOOST_DYN_LINK)
#  error "Mixing a dll boost library with a static runtime is a really bad idea..."
#else
#  define BOOST_LIB_PREFIX "lib"
#endif
}}}
I recently found and watched a rather historic [[interview|http://video.allthingsd.com/video/bill-gates-and-steve-jobs-at-d5-full-session/60C4F9FA-9AD5-4D04-8BB6-015AEBB1C052]] of Steve Jobs and Bill Gates sitting right next to each other to talk about their companies, their differences, and a few other topics about technology. The interview was very interesting and insightful, but one thing struck me in reference to an earlier [[tiddler|Microsoft Office: A Case of Over-Ambitious and Over-Engineered Software]] I wrote about Microsoft software. The question was something like "what do you think your rival has done better in his career," to which Steve Jobs answered that Bill Gates' distinction was his ability to find good business partners. Yes, Microsoft has always been very successful at gathering partners for its products. And that is a good thing. The problem, however, is that Microsoft typically promises too much to its partners and more often than not it fails to deliver. Microsoft clearly tries to satisfy too many people at once, and it is practically impossible to write good software that does so many different things well. The partnerships also put Microsoft under pressure to deliver its products within short development cycles.

Going back to the interview, it would have been a great event to remember if it weren't for the two incompetent hosts. They had no charisma, interrupted too often, and didn't seem to know how to ask good questions. The interview felt too improvised and unscripted. Now, unscripted interviews are great when the host has character and knows how to keep the discussion interesting, but these hosts clearly did not have those qualities. I felt bad for luminaries like Jobs and Gates having to cope with those rookies.
Stickies is a great built-in application in Mac OS X that lets you create little //sticky// notes that always float on top of your screen.
If you follow my [[guidelines|Environment Variables in Mac OS X Leopard]] for setting up environment variables in Mac OS X, you should be able to get a consistent value for your environment variables in all circumstances. However, in some cases you may find that XCode will ignore your up-to-date settings for some environment variables. More specifically, when you type a setting using an environment variable, such as {{{$(BOOST_ROOT)}}}, you will be surprised to see that its value does not match the current settings. Here is why: in XCode's target settings you can actually hard code the value of an environment variable, which overrides your system settings. These are called //User Defined Settings// and can be added at will to any project. They generally appear at the very end of your target settings page. Just remove that setting and the problem is solved!

NOTE: user defined variables are stored in {{{project.pbxproj}}} inside the xcodeproj folder (you can access it only from the terminal)
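If you suspect a hidden override, a quick way to find it is to search the project file for the variable's name. A minimal sketch (the {{{find_setting}}} helper and the example path are mine, for illustration only):

```python
def find_setting(pbxproj_path, name):
    """Return the lines of a project.pbxproj file that mention a given setting."""
    with open(pbxproj_path, errors="replace") as f:
        return [line.strip() for line in f if name in line]

# Example (hypothetical project path):
# for line in find_setting("MyApp.xcodeproj/project.pbxproj", "BOOST_ROOT"):
#     print(line)
```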
Eigen is probably one of the best C++ math libraries yet. It is clean in design, easy to use, and very efficient. Its efficient design, however, brings some subtle challenges to the developer. In fact, you can't simply declare Eigen's data structures as class members without running into trouble. When you do so, you will probably get an unusual run-time exception for seemingly no reason. These exceptions are triggered when you try to construct the containing class. The problem is that Eigen wants to ensure that its data structures (vectors and matrices) are laid out in memory for best cache efficiency, and it uses a run-time mechanism to verify that this is the case. Normally, when you instantiate a class, a good optimizing compiler pads the class instance to ensure proper alignment for the entire class, but proper alignment is not guaranteed for its data members. This is why Eigen complains. The solution is to override the {{{new}}} operator to allocate the class in a way that suits Eigen. Luckily, Eigen provides the macro {{{EIGEN_MAKE_ALIGNED_OPERATOR_NEW}}} to automate this process. Just place this macro at the beginning of your class declaration and you are done. You shouldn't in general mess with the {{{new}}} operator, so Eigen's requirement shouldn't interfere with most of your classes. Very clever!

* http://eigen.tuxfamily.org/api/TopicStructHavingEigenMembers.html
When you are using a Subversion repository on a remote server over an SSH connection, the authentication process is somewhat cumbersome and may prompt you for a password multiple times for any Subversion action. Because of this, many Subversion front ends on the Mac don't work correctly, reporting only a cryptic network error when they fail. The solution is to ditch password prompts completely by setting up automatic authentication. This is not a very secure choice, but it works!

[[This|http://blog.macromates.com/2005/subversion-support-and-ssh-key-pairs/]] page details how to do it.
[[Sysinternals|http://technet.microsoft.com/en-us/sysinternals/default.aspx]] is a suite of very powerful and thorough utilities that can help you investigate and diagnose the Windows operating system down to its most minute details. These tools are very useful both for troubleshooting tricky problems and for getting some hands-on understanding of various low-level details of the operating system. A few individual tools that I find particularly interesting are:

''Autoruns''
Shows you an extremely thorough list of all processes and dlls that are set to be loaded at startup. This is a great tool for finding which tasks slow down Windows' startup, and it is also the most comprehensive list of all the locations and registry entries that can be used to load applications at startup.

''Process Explorer''
This is a very sophisticated "Task Manager" that shows you in-depth information for all running processes.
I am a believer in David Allen's [[Getting Things Done|http://en.wikipedia.org/wiki/Getting_Things_Done]] philosophy. The basic idea is to record the myriad tasks that we have to perform on a daily basis on an external medium, away from our minds, which helps us focus on actually getting things done rather than overloading our heads with the burden of remembering all of them. However, in today's fast-paced digital lifestyle, good old paper planners are not good enough. We need a more dynamic medium that enables us to create, modify, and group tasks with ease.

For some time I have been using [[TiddlyWiki|http://www.tiddlywiki.com/]] (used for this page) and its scientific variant [[ASciencePad|http://math.chapman.edu/~jipsen/asciencepad/asciencepad.html]] for this purpose. However, while I am very satisfied with these tools, I have been also looking for something more lightweight """--""" something that would function more like a digital TODO list instead of a detailed personal journal.

I recently found a few interesting software solutions for lightweight task management. The first one is a slick application for the Mac called [[Things|http://culturedcode.com/things/]]. This application looks great, and it has won several awards, but unfortunately it costs around $50, only works on the Mac, and is not a truly portable solution unless you are willing to spend even more to buy its iPhone companion. A more interesting solution is the oddly named web service [[Remember the Milk|http://www.rememberthemilk.com]]. Remember the Milk (RTM) is a free online service that allows you to create and organize tasks with ease. The main interface is on their website, but the service also supports a number of other protocols that we have come to expect in Web 2.0. My favorite way to interact with RTM is through the Google Gadget that you can enable right inside Gmail. Creating a task is extremely easy; you simply type it and RTM automatically parses it using a simple form of NLP that recognizes times and dates, and it also supports a few special [[mark ups|http://blog.rememberthemilk.com/2009/09/introducing-smart-add-a-smarter-way-to-add-your-tasks/]] to specify additional details about the task, such as category and tags. RTM, being a true Web 2.0 service, also allows you to integrate your tasks with a variety of common online and offline calendar software, and can send you reminders in many different ways: by email, SMS, and IM.

While you can do almost everything in RTM without actually going through their website, there are two little improvements that I would like to see in RTM's Google Gadget:
* On the main web interface you get auto completion when you tag your task, but this feature is not available in the Google Gadget.
* RTM's Google Gadget in Gmail is pretty narrow and you can read only about 24 characters of each task in the list. Of course you can expand each task by clicking on it, but it would be nice if you could see the full text of each task in a tooltip by hovering the mouse over it.
[[Task management tools|Task Management Tools]] are pretty good when you need to organize your own daily or weekly schedule. I already discussed a few of the software solutions that can help you do just that. However, another question that I have been pondering for a while is whether or not these tools are effective for managing small personal software projects.

The topic of managing software is a big one and a major focus of software engineering. There are many theories of how software projects should be managed, and there are also several tools specifically designed to aid the management of large software projects, such as [[Microsoft Project|http://office.microsoft.com/en-us/project/default.aspx]] and [[Open Workbench|http://www.openworkbench.org/]] among many [[others|http://en.wikipedia.org/wiki/List_of_project_management_software]]. But what about small projects? These established applications for software management seem rather overwhelming for something small. Yet even smaller projects can benefit from some basic level of project management.

These are a few of the things that need to be managed for software projects at all levels:
* A list of TODOs organized in a hierarchy and with different priorities.
* A list of bugs and notes on how to isolate them, their status and possible solutions.
* A list of things that have been patched up quickly or might not yet be entirely correct, and need to be addressed at some point. These are generally tagged as FIXME in code.

What I am looking for is an application that is as lightweight and easy to use as other task management tools, but with a more specific focus on software projects. I have not yet found anything like this.
3D Realms used to be a very well-respected game developer in the '90s. They developed nothing less than Duke Nukem 3D, which is an absolute classic of the first person shooter genre, and they also helped produce some other notable milestones such as Wolfenstein 3D and Max Payne. 3D Realms was definitely a household name for game enthusiasts. Those who are more knowledgeable in gaming history will remember that this company, which was called Apogee Software back then, was also one of the first prominent shops to adopt the shareware model of distribution.

However, 3D Realms also became famous for its habit of breaking promises and missing deadlines. First they promised Prey, a game that stunned the audience at E3 with its innovative use of portals. But Prey never came -- at least not directly from 3D Realms. After a few cancellations and restarts, development was finally assigned to Human Head Studios and the game came out almost ten years after it was announced. Then there is Duke Nukem Forever... The game was first announced 12 years ago with grand fanfare at the height of the success of Duke Nukem 3D. Duke Nukem Forever was restarted and rewritten multiple times and with different engines. George Broussard, 3D Realms' head, assured gamers that development on Duke Nukem Forever was proceeding well, but all we have seen during this long gestation process were a few not-so-impressive teaser trailers. In the meantime, gaming has changed dramatically, a whole generation of consoles came and went, and a crowd of new gamers, who had never heard of Duke Nukem, joined the ranks of the game-buying population. It has been so long that the Duke Nukem franchise is now obsolete. What happened during all this time?

Whatever happened, it is not surprising that 3D Realms has now officially [[closed its doors|http://www.shacknews.com/onearticle.x/58519]]. This is very sad news for gamers, but undeniably the guys at 3D Realms did not know how to do business. As a matter of fact, I recently had confirmation of this. On the occasion of Wolfenstein 3D's 17th birthday, I discovered this [[page|http://www.buy3drealms.com/wolfenstein3d.html]]. Wolfenstein 3D is surely a game of great historic importance, but no reasonable man would try to sell you an ancient game that might not even run on modern machines for $15!
There is an important, but easy-to-forget difference between {{{su}}} and {{{sudo}}} in Unix environments. 

{{{su}}} gives super-user privileges to the current user by essentially logging them into the server as root. {{{su}}} requires the root password.

{{{sudo}}} allows the current user to perform a single command with super-user privileges and is considered the more secure approach. {{{sudo}}} requires the current user to be in the server's sudoers list and asks for the user's own password (not root's!). 

To add a user to the list of sudoers you must use the {{{visudo}}} command with super-user privileges. This command opens {{{/etc/sudoers}}} in the {{{vi}}} editor with additional syntax checking, and prevents multiple users from changing the file simultaneously.

* [[http://aplawrence.com/Basics/sudo.html]] 
''Automatic Type Inference''
The upcoming C++0x standard gives the {{{auto}}} keyword a new meaning that allows the compiler to infer the type of a variable automatically at initialization. For instance, you can write:

auto myVariable = 5;

which automatically sets the type of {{{myVariable}}} to {{{int}}}. Now, why would you ever need type inference in a strongly typed language like C++? It seems like a bad idea! Well, take a look at the following example (from Wikipedia):

auto someStrangeCallableType = boost::bind(&SomeFunction, _2, _1, someObject);

As in this case, there are many instances in C++ code where the type of a variable is extremely hard to figure out and write out explicitly. As always, these are advanced features of the language that only expert programmers should use, and they should do so only when it improves code quality.

C++0x also introduces the keyword {{{decltype}}} that evaluates the type of an expression:
auto myVariable = 5;
std::vector< decltype( myVariable ) > myContainer;

''Support in Visual Studio''
These keywords and their respective semantics are fairly new, and relatively few compilers support them yet. For instance, the C++0x {{{auto}}} and {{{decltype}}} keywords are only available in the recent Visual Studio 2010 and not in previous versions. Beware that the {{{auto}}} keyword is accepted by previous versions of Visual Studio, but its semantics are different, as explained [[here|http://msdn.microsoft.com/en-us/library/6k3ybftz.aspx]].
Here I am going to describe a very subtle issue with C++ that may lead to a lot of head-scratching.

Take a look at this apparently innocent C++ program:
#include <iostream>

class A
{
public:
	A( int ID = 0 ):
	  _ID( ID )
	{}

	void printID() const
	{
		std::cout << _ID << std::endl;
	}

private:
	int _ID;
};

int main( int argc, char** argv )
{
	A myA();
	myA.printID();

	return 0;
}


When you try to compile it in Visual Studio, you will get this error message:
Error	1	error C2228: left of '.printID' must have class/struct/union
The compiler is essentially telling you that the {{{myA}}} in {{{myA.printID()}}} is not a class instance. In fact, you may think that {{{A myA()}}} instantiates an object of class {{{A}}} using its constructor with the default argument, but it doesn't. The problem is that the C++ grammar requires the compiler to treat that line as the declaration of a function called {{{myA}}} that takes no arguments and returns an {{{A}}}: anything that can be parsed as a declaration must be parsed as a declaration, even when, as here, an object definition was clearly intended. This is not a bug in the compiler and is required by the C++ standard. This behavior is called //the most vexing parse//.

If you look at the disassembly of this program, you'll notice that no code is generated for the class instantiation:
int main( int argc, char** argv )
011E1380  push        ebp  
011E1381  mov         ebp,esp  
011E1383  sub         esp,0C0h  
011E1389  push        ebx  
011E138A  push        esi  
011E138B  push        edi  
011E138C  lea         edi,[ebp-0C0h]  
011E1392  mov         ecx,30h  
011E1397  mov         eax,0CCCCCCCCh  
011E139C  rep stos    dword ptr es:[edi]  

	A myA();

011E139E  xor         eax,eax  
Notice that I commented out the line {{{myA.printID()}}} to allow compilation in this case.

To solve the problem, simply define the class instance as {{{A myA;}}} without the parentheses.

Let's look at the disassembly again after this modification:
int main( int argc, char** argv )
011B13D0  push        ebp  
011B13D1  mov         ebp,esp  
011B13D3  sub         esp,0CCh  
011B13D9  push        ebx  
011B13DA  push        esi  
011B13DB  push        edi  
011B13DC  lea         edi,[ebp-0CCh]  
011B13E2  mov         ecx,33h  
011B13E7  mov         eax,0CCCCCCCCh  
011B13EC  rep stos    dword ptr es:[edi]  

	A myA;
011B13EE  push        0  
011B13F0  lea         ecx,[myA]  
011B13F3  call        A::A (11B1014h)  
011B13F8  lea         ecx,[myA]  
011B13FB  call        A::printID (11B1023h)  

011B1400  xor         eax,eax  
Now the code compiles as expected and we can see from the disassembly that code is being emitted for the definition of the class instance.

* http://stackoverflow.com/questions/4175971/error-c2228-left-of-size-must-have-class-struct-union
* http://en.wikipedia.org/wiki/Most_vexing_parse
For years, IntelliSense """--Microsoft's code intelligence engine--""" has been one of the best in the business, providing Visual Studio with a level of code completion unmatched by most other IDEs. In particular, IntelliSense is one of the very few engines that can deal with templatized C++ code. Yet IntelliSense has also traditionally been riddled with its own share of problems: database files getting corrupted, code completion suddenly stopping working, and many others. In fact, IntelliSense is sometimes so fragile that other "dumber" IDEs end up being better, because they at least always provide the same level of dumb code completion consistently. These problems are substantial enough that other companies have had relative success selling alternative engines. The most popular example is [[Visual Assist X|http://www.wholetomato.com/default.asp]] by Whole Tomato Software. 

Microsoft, of course, is very aware of the problem and has been working for years on a complete overhaul of the technology behind IntelliSense, as detailed [[here|http://blogs.msdn.com/b/vcblog/archive/2009/05/27/rebuilding-intellisense.aspx]] by one of Visual Studio's engineers. Several years after the previous release of Visual Studio in 2007 and a lot of upset customers later, Microsoft finally released its major revision of IntelliSense with Visual Studio 2010. Is this the fix that we were all waiting for? Resoundingly, yes! I ported a large and complex project with very tricky templatized C++ code to Visual Studio 2010, and the code completion is not only far more thorough than in previous releases, but it also seems to work consistently well.
The iPad is certainly one of the worst-kept secrets in Apple's history. Everybody knew that Apple was going to release some kind of tablet; there were even pictures of the device floating around on the web for quite some time before the official announcement. Nonetheless, the excitement surrounding Apple products is always rather big...

When I saw Jobs's keynote online, I was """--like most--""" rather unimpressed by the iPad. Even Jobs's famous //reality distortion field// was not strong enough to convince me that the iPad is something other than an inflated iPhone. Yes, it is bigger and faster, but the user experience and the modes of interaction that it offers are the same as the iPhone's (and those of the many touch-based devices that try to be like the iPhone). Perhaps the only innovation that I immediately thought might hold some promise is the iBooks Store. The iBooks Store may in fact mirror the success of iTunes and the App Store, but it still isn't an innovation of the //device//; rather, it is an innovation that comes //through// the device. 

However, after thinking about the iPad for some time, I realized that it //does// have potential! There are, for instance, many fantastic applications on the App Store that cannot achieve their full capability simply because the iPhone's screen and, of course, the space for interaction are too small to be practical. The mere fact that there is a larger screen on the iPad opens up very interesting possibilities for developers. Just a few days ago, I was watching a few videos by keyboardist extraordinaire Jordan Rudess where he showcased a bunch of extremely cool applications for music synthesis and loop generation. Those apps are surely pretty amazing, but it is also pretty ridiculous to use them for anything serious on that tiny 3.5" display. However, when you inflate that screen to 9.7", the game changes completely.
* This is a very useful [[reference page|http://tiddlywiki.org/wiki/TiddlyWiki_Markup]] to keep around, when editing Tiddlers
* [[Version]]
Once again the Windows Registry monster produced an obscure problem, and once again Google provided the solution. If you have trouble installing the latest Windows SDK, it might be due to a number of different problems. Here are a few links that provide several possible solutions:
# http://ctrlf5.net/?p=184
# http://notepad.patheticcockroach.com/1666/installing-visual-c-2010-and-windows-sdk-for-windows-7-offline-installer-and-installation-troubleshooting/

The one that worked for me required uninstalling the Visual Studio 2010 redistributable before running the SDK installer. The installer still exits with an error message, but it won't roll back its changes, and as far as I am concerned the installation was successful. 

* http://stackoverflow.com/questions/1901279/windows-7-sdk-installation-failure
Dealing with linker errors in C++ is always a challenge and any C++ programmer knows this all too well. What makes it so difficult is that the linker is typically unable to give meaningful guidance as to what has caused the problem in the first place. So, in most cases you are left to figure out the problem by trial and error, and little more. 

Here I describe some of the common problems that lead to linker errors and a few tips on how to resolve them.

''Symbol not Found''
This is the most common problem. The main job of the linker is to assemble all the individual binary files emitted by the compiler, called object files, into a single executable binary. To do so, the linker must match all symbols (function or method calls) to the corresponding implementation found in any of the object files fed into it. This error occurs when the linker cannot find a proper implementation for a given symbol, and it may be caused by:

# You forgot to actually implement the function or method. Often, when you separate the declaration of a class from its implementation, you may forget to implement some of its most trivial methods, such as the default constructor or the destructor.
# The signature of the implementation does not match the declaration. For instance, it is not uncommon to declare a class method with the //const// modifier but forget to specify it in the actual implementation.
# You implemented the symbol, but you forgot to include the CPP file in your project or makefile.
# The implementation of a symbol is located in an external library, which is not being linked with your project.
# The implementation of a symbol is located in an external library that you are linking to, but the given binary does not match the architecture of the main project.

Problems (4) and (5) are typically the most common as well as the most difficult to resolve.

''Multiply Defined Symbol''

''Missing Virtual Function Table''
Errors that mention a missing or undefined virtual function table (//vtable//) typically mean that a class declares a virtual function but never defines it, or that the object file containing the definition is not being linked in. Many compilers emit the vtable in the translation unit that defines the class's first non-inline virtual function, so if that one definition is missing the entire table disappears and every virtual member appears unresolved.
Namespaces are a great feature of C++ that help developers keep things tidy and avoid annoying name clashes in their code. Most C++ programmers should be familiar with //named// namespaces, but interestingly C++ allows //unnamed// namespaces as well:
namespace
{
    void foo()
    {
        // do something
    }
}
Now, the question is: why in the world would you want to do this?

Unnamed namespaces allow you to restrict the scope of a definition to a single compilation unit, avoiding name clashes between multiple implementation files. They replace something that used to be accomplished with the {{{static}}} keyword in C. Let's explore this concept by example. Consider a header file {{{test.h}}}:
#ifndef TEST_H
#define TEST_H

#include <iostream>
#include <string>

namespace PUBLIC
{
	void postMessage1( const std::string& message );
	void postMessage2( const std::string& message );
}

#endif	// TEST_H
and now define the functions {{{postMessage1}}} and {{{postMessage2}}} in two separate implementation files:
#include "test.h"

namespace
{
	void __postMessage( const std::string& message )
	{
		std::cout << message << " [WARNING]" << std::endl;
	}
}

void PUBLIC::postMessage1( const std::string& message )
{
	static unsigned int s_messageCount = 0;
	std::cout << "[" << s_messageCount++ << "]: ";
	__postMessage( message );
}


#include "test.h"

namespace
{
	void __postMessage( const std::string& message )
	{
		std::cout << message << " [ERROR]" << std::endl;
	}
}

void PUBLIC::postMessage2( const std::string& message )
{
	static unsigned int s_messageCount = 0;
	std::cout << "[" << s_messageCount++ << "]: ";
	__postMessage( message );
}


Both implementation files define a local hidden function called {{{__postMessage}}}. Here the unnamed namespace indicates to the compiler that the definition of this function is confined to that specific compilation unit, and the linker is perfectly happy because each definition is treated as being in a separate namespace. However, if you omitted the namespace block above, the linker would complain that a symbol is being defined multiple times. Now, if you call {{{postMessage1}}} and {{{postMessage2}}} in your main, their behavior is different, even though the body of the two functions is identical.

Defining hidden functions like this is not necessarily a good practice, but there are cases where it is useful. 

As Tim Peters puts it in the Zen of Python: "Namespaces are one honking great idea -- let's do more of those!" 

* http://archive.atomicmpc.com.au/forums.asp?s=2&c=10&t=4171
Command+K................clear the console thoroughly (clear the entire text buffer!)

CTRL+F.......................Page Down
CTRL+B......................Page Up
/<keyword>.................Search <keyword>
n..................................Repeat last search
q..................................exit man page
h..................................show detailed key help (press //q// to go back to the man page)
g..................................jump to beginning
G.................................jump to end

[[Here|HOME and END functionality in a Bash Shell]] I describe how to modify some of these shortcuts.
Compared to full-sized PC laptops, MacBooks have fewer keys on their keyboards. This is often a problem if you use special keys like Page Up/Down and Home/End often while typing. However, this limitation becomes particularly cumbersome if you run Windows on a MacBook, since the workflow on Windows relies more heavily on these keys.

Here are a few useful keystrokes:
Insert = Fn + Enter
Break = Fn + Esc

Here is how to perform some common Windows shortcuts:
CTRL+ALT+DEL = Fn+Control+Option+Delete
Copy = Fn+Control+Enter
Paste = Fn+Shift+Enter

* [[http://manuals.info.apple.com/en/boot_camp_install-setup.pdf]]
* [[http://www.rethinkit.com/blog/macbook-pro-keyboard-mapping-for-windows/]]
One basic feature that is definitely missing in Visual Studio is a built-in shortcut to switch between header and source files in a C/C++ program. Perhaps this omission is due to the fact that Visual Studio today is mainly geared toward development in one of the many .NET languages available. But still, a large number of developers use Visual Studio to write C++ code, so this kind of negligence is somewhat frustrating. Nonetheless, Visual Studio is an excellent IDE. 

Fortunately, Visual Studio can be easily extended using macros. Being a feature in high demand, there are many macros on the web to do this, but I found [[this|http://www.alteridem.net/2008/02/26/visual-studio-macro-to-switch-between-cpp-and-h-files/]] one to be the best. Here is the macro:


Imports System
Imports EnvDTE
Imports EnvDTE80
Imports EnvDTE90
Imports System.Diagnostics

Public Module CppUtilities

    ' If the currently open document is a CPP or an H file, attempts to  
    ' switch between the CPP and the H file.  
    Public Sub SwitchBetweenSourceAndHeader()
        Dim currentDocument As String
        Dim targetDocument As String

        currentDocument = ActiveDocument.FullName

        If currentDocument.EndsWith(".cpp", StringComparison.InvariantCultureIgnoreCase) Then
            targetDocument = Left(currentDocument, Len(currentDocument) - 3) + "h"
        ElseIf currentDocument.EndsWith(".h", StringComparison.InvariantCultureIgnoreCase) Then
            targetDocument = Left(currentDocument, Len(currentDocument) - 1) + "cpp"
        Else
            Exit Sub
        End If

        OpenDocument(targetDocument)

    End Sub

    ' Given a document name, attempts to activate it if it is already open,  
    ' otherwise attempts to open it.  
    Private Sub OpenDocument(ByRef documentName As String)
        Dim document As EnvDTE.Document
        Dim activatedTarget As Boolean
        activatedTarget = False

        For Each document In Application.Documents
            If document.FullName = documentName And document.Windows.Count > 0 Then
                document.Activate()
                activatedTarget = True
                Exit For
            End If
        Next

        If Not activatedTarget Then
            Application.Documents.Open(documentName, "Text")
        End If
    End Sub

End Module

Paramiko is a library for using {{{ssh}}} and {{{sftp}}} in Python. This library is very useful for writing administrative scripts to automate the management of UNIX servers. However, one issue that comes up often is that the server may not allow remote scripts to use {{{sudo}}}, giving you an error message similar to:
sudo: no tty present and no askpass program specified. 

One solution is to edit your {{{sudoers}}} file with {{{sudo visudo}}} and add the following line:
Defaults visiblepw
This essentially instructs the {{{sudo}}} command to accept connections without a {{{tty}}}.

Another, possibly cleaner, solution that has been suggested is to use the Paramiko command {{{invoke_shell}}} to create a pseudo-shell in which to run a command on the remote server, but the session crashes if we do this:
channel = ssh.invoke_shell(term='bash', width=80, height=24)
stdin, stdout, stderr = ssh.exec_command(command)

* http://www.linuxquestions.org/questions/debian-26/sudo-no-tty-present-and-no-askpass-program-specified-877695/
* http://www.sudo.ws/pipermail/sudo-users/2009-August/004142.html
//numpy// is an excellent library for Python, but it is not always obvious how to accomplish something with it. One example is how to compute the Euclidean norm, or magnitude, of a vector.

''Method 1: do it yourself''
Square the components, sum them, and take the square root: {{{numpy.sqrt(numpy.sum(v ** 2))}}}.

''Method 2: use the Euclidean norm function''
{{{numpy.linalg.norm(v)}}} computes the same quantity directly (its default is the 2-norm).
''Disable the macro balloon''

Edit the Registry Key

Add the DWORD
DontShowMacrosBalloon = 1

''Add a code guide in the editor''

Edit the Registry Key
HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor

Add the String 
Guides = RGB(128,128,128) 80
While getting ready to enjoy Pixar's upcoming feature Up, I would like to write a few thoughts on what made their previous release so great. Wall-e is not only a great achievement in animation, but also a movie that redefines the boundaries of storytelling and of what can be done with the CGI medium. For this, Wall-e is definitely one of the most daring big-budget projects in computer animation yet. As [[Jon Anderson of the Washington Post|http://www.washingtonpost.com/wp-dyn/content/article/2008/06/26/AR2008062604139.html#]] puts it:

//The idea that a company in the business of mainstream entertainment would make something as creative, substantial and cautionary as WALL-E has to raise your hopes for humanity.//

So what is so peculiar about Wall-e? I start with a subtle observation. There is something special about the aesthetics of computer animation that makes storytelling with this medium rather different from standard live-action features and other forms of animation. In computer graphics """--at least in the domain of photorealistic rendering--""" there is a strong juxtaposition between the realistic rendition of images, with accurately ray-traced lighting and physically plausible animation, and the fictitious nature of the subject matter. This is what makes computer animation compatible with the sensibilities of [[surrealism|http://en.wikipedia.org/wiki/Surrealism]]. There are, of course, other forms of computer-generated imagery that do not agree with this notion, but photorealistic rendering is by far the dominant approach for commercial features. The point is that the choice of medium for an animated movie //does// have a substantial impact on how the story is delivered to the audience and on the subtleties of the message that it carries. For instance, imagine remaking classics like The Lion King or Nightmare Before Christmas with a different medium, but without changing the story, the dialog, the staging, and the timing of the movie. The end result would be very different. A claymation Lion King? It would not be half as moving as the original! Now, movie studios today choose to make computer-animated movies almost exclusively based on financial considerations and rarely with artistic goals in mind. After all, computer animation typically sells well, and it is faster and cheaper to produce compared to other animation techniques and even traditional live action. Yet Pixar does not even have the freedom of choice; computer animation is a prescribed commitment for them. So again, what is so peculiar about Wall-e? Wall-e is the first commercial feature that delivers a story that weaves well with the aesthetics of computer animation. 
It is a story that works at multiple levels. On the one hand, there is the light-hearted tale of a sensitive robot falling in love -- a work of fiction designed to appeal to young audiences. The true message, though, is a very concrete cautionary outlook on humanity, one that is bleak and chilling. Wall-e is indeed a movie that thrives on contradictions, as suggested by Kenneth Turan of the Los Angeles Times:

//Daring and traditional, groundbreaking and familiar, apocalyptic and sentimental, Wall-E gains strength from embracing contradictions that would destroy other films.//

It is this very aspect of the tale that makes the surreal aesthetics of computer animation so appropriate for this movie. 

Speaking of daring, one of the most cherished aspects of the movie is its exquisite first hour with no dialog. Only some blips and oldies do the talking. No speech means that all the story, the emotions, and the humor must be carried solely by the visuals and the animation. No director who understands computer animation and its limits would have even thought of doing that before """--especially when there are millions of dollars at stake""". But then again, this is Pixar!
When you define a symbol in C++ whose name clashes with another, you often end up getting a lot of seemingly inexplicable compiler errors. It is therefore very tricky to resolve this kind of problem when it occurs. It is particularly frustrating to deal with name clashes involving symbols defined in //windows.h//. And this is not a very uncommon event, since //windows.h// defines a lot of global functions and macros with fairly common names. Here are a few problem symbols:

* [[Polyline|http://msdn.microsoft.com/en-us/library/dd162815(VS.85).aspx]]
* min and max macros ([[http://support.microsoft.com/kb/143208]]); define {{{NOMINMAX}}} before including //windows.h// to avoid problems
Akamai is a download manager that several companies use to distribute their downloadable content. As an example, the entire suite of Autodesk products and trial software is distributed using this method. However, Akamai has a number of quirks and gotchas that may stump most users.

Typically, when you want to download a program that uses Akamai, you get a small executable file that simply runs the download manager. The way Akamai works is that it activates a background process that communicates with your browser and lets you manage your download from a web interface. Now, this only works ''if you have your browser actually open on the right page where the download is supposed to happen''. Otherwise, you will just see a small progress bar on the screen for a split second followed by... nothing. NOTHING! The application simply closes without giving the user any feedback. On the other hand, if the download process is successful, you either get a link on the download page in your browser that simply launches the file that was downloaded, or Akamai will automatically launch it for you without asking.

If you need to reinstall the application that you downloaded at a later time, or simply want to keep the actual files handy for archival, you need to know where the files are. Again, you get absolutely no feedback as to where your files are. So here are the common download paths that Akamai uses:

''Windows XP''
{{{C:\Program Files\Common Files\Akamai\Cache}}}

''Mac OS X''

On all platforms, inside Akamai's ''Cache'' folder you will find one or more folders with a numeric ID that actually contain your files.
You would probably never wonder why Visual Studio uses the //Time Accessed// time stamp to detect changes for the incremental compiler. That's because it usually doesn't have any bearing on your work. However, if you try to develop a Visual Studio project on a non-NTFS file system, you'll notice that this little detail drives the incremental compiler crazy. This happens, for instance, if you use [[MacDrive|http://www.mediafour.com/products/macdrive/]] to create a Visual Studio project on an HFS+ partition. In fact, HFS does not support the time-accessed time stamp, so MacDrive simply keeps it in sync with the date-modified time stamp. 

The reason for all this is that Visual Studio does not want to lock files that are opened in the editor, so external text editors can modify them. The date-modified time stamp only gets updated when you effectively close a file. 
''The Suckiness of PowerPoint Explained''
PowerPoint is a deeply flawed, overly complex piece of software that even after years of development and several point releases is still broken in several important areas. One of the most broken features of PowerPoint is video playback. The first question is: why? The problem is that PowerPoint does not take advantage of the rich media capabilities of Windows Media Player; instead it uses the far less capable playback facilities built into Windows itself through the [[MCI API|http://msdn.microsoft.com/en-us/library/dd743458(VS.85).aspx]]. While it is a questionable technical solution, there are a few reasons why Microsoft might have chosen to do this:
* Windows Media Player is a rather heavy application that might slow down the performance of PowerPoint considerably.
* PowerPoint needs to trigger video on demand to sync playback with other events.
* This design choice was made long ago, when Media Player was still running on top of MCI rather than having its own specialized playback capabilities.
* It is a choice consistent with how PowerPoint handles other file formats; it uses system services rather than its own built-in capabilities.
An even bigger annoyance is that the MCI API does not behave consistently on different machines. If the suckiness were consistent, at least one could find a reliable workaround that would work everywhere. But no! You can have embedded video that works perfectly fine on your machine, but when you load your presentation on the machine in the conference room, nothing works anymore. This is because MCI has a modular structure and its actual capabilities are defined by which OS components and which other software are installed on your machine, so effectively every computer is different in this regard.

Videos in PowerPoint can fail in several ways:
* The video does not play at all. In this case, you generally either get a black rectangle or see only the first frame of the video, but you can't play it.
* The video flashes black before or during playback.
* The video looks corrupted.
* The video looks correct on your machine, but it does not play on an external monitor or projector during a slide show.  

In turn, these problems may be caused by a few different scenarios:
* The video file is not found
* MCI cannot handle the codec required to play the video.
* MCI has the correct codec, but is unable to handle the specific options you used to encode the video.
* MCI works correctly only when the resolution of your video is a standard one and the aspect ratio is 4:3.
* PowerPoint has trouble resolving the path of the video. If the absolute path of your presentation and videos is too long, PowerPoint may not be able to play your videos. PowerPoint 2007 should not have this problem.  

* [[http://www.indezine.com/products/powerpoint/ppmultimedia2.html]]
* [[http://msdn.microsoft.com/en-us/library/dd743458%28VS.85%29.aspx]]
* [[http://www.echosvoice.com/tshoot_video2.htm]]
The iCloud service that was announced today by Steve Jobs is very particular about music, especially music that comes from Apple's own iTunes Store. Here is a summary of the main features of iCloud regarding music files: 
# Music files purchased from iTunes don't count toward your disk storage quota
# Music files purchased from iTunes can be stored for free
# Music files that were not purchased from iTunes can be matched for a yearly subscription
# Music files can be uploaded very quickly
These points reveal a few interesting clues about the technology behind iCloud. 

''iTunes files are not uploaded''
First of all, iTunes files don't really need to be uploaded to the cloud at all, since they already live on Apple's servers. The iCloud client simply acknowledges that you have such files on your computer and keeps a record of this fact on the cloud, so that you will be able to download the files again when you need them. This also explains why iTunes music does not count toward your storage quota.

''Matching is a form of de-duplication''
We already established that handling iTunes files is very simple and fast, because iCloud does not need to upload the file at all. If, instead, you have a song that you did not purchase from iTunes but that exists in the iTunes library, iCloud will likely try to match your file with the corresponding song in the iTunes catalog and again avoid uploading large amounts of data. Not surprisingly, this feature is called //iTunes Match//. This technique is also known as de-duplication and is used by other specialized online storage services, such as [[Gobbler|http://www.gobbler.com/]], to save both bandwidth and storage.

iTunes Match raises some clear licensing issues that may explain the pricing model offered by Apple. In principle, someone could download a low-quality pirated version of a song, let iTunes do the matching, and download the high-quality version back from iCloud, effectively turning it into an iTunes song file. Record labels will not like that very much. So, by charging users for the matching service, Apple is simultaneously deterring people from pirating songs and covering possible licensing fees that Apple may have to pay to keep record labels at bay.

A more interesting question is how the matching works. It may be a combination of using the ID3 tags in your music files and the kind of audio hashing used by services like Shazam and SoundHound.

''Why all this charade?''
Other digital-delivery services, like Steam for games, keep track of the items that you purchased and let you download them as many times as you want. Why doesn't iTunes do the same? If iTunes simply allowed you to easily download the files you purchased, this charade about uploading yet not uploading songs would not be needed. In fact, iTunes clearly knows about the files that you purchased, and you can even see a list when you view your account. The problem is likely that enforcing copyright for music files is more difficult. A service like Steam not only manages your purchases, but it is also a platform that enforces DRM on the games that you download. On the other hand, iTunes ditched (at least partially) its old and obtrusive way of protecting songs with DRM, so you can now play iTunes songs wherever you want. As a result, if users were allowed to download all their purchases easily, they might simply go to a friend's computer, log in with their iTunes account, and deliver a copy of their library to someone else.
Sony recently released the Move, its motion-sensing solution for the Playstation 3, in an attempt to expand its market to a potentially very lucrative audience of casual gamers. Yet the Move not only failed to garner the early traction that it needed, but it also appears to be doomed from the start by a number of other factors. To make matters worse for Sony, the Playstation Move went largely unnoticed, overshadowed by the much more compelling and innovative Kinect launched shortly after by Microsoft.

The most obvious problem with the Move is that it is almost a blatant clone of the Wiimote. It has the same accelerometers and gyros, and it feels exactly like the Wiimote as far as gaming is concerned. The only real technical difference is that the Move places the light emitter on the controller and the sensor on the TV, but again this is conceptually almost the same as Nintendo's solution. Moreover, unlike the Wii's cheap infrared diodes, the Move relies on visible light, requiring a full-fledged RGB camera for sensing, which translates to a higher sticker price for consumers. Sure, the Move is in principle more accurate than the Wiimote, but motion sensing in games is about registering large active motions, not doing microsurgery! One advantage of Sony's approach is that the camera allows for some interesting applications of augmented reality, but what it offers is nowhere near what the Kinect can do, and it is still not quite clear how game developers can build solid gameplay around this tech. Cheap gimmicks go only so far.

Most importantly, the Wii has been on the market for four years now and it has almost 76 million happy customers around the world -- how can Sony compete with that without diversifying itself? The major problem is that the Playstation 3 is clearly geared toward more hardcore gamers, and its selection of games and especially its high price reflect that choice. A casual gamer looking for some family entertainment or a few party games to play occasionally either owns a Wii already, or is going to pick Nintendo's cheaper console with its established selection of fun games. At the same time, those who already own a Playstation 3 are not likely to shell out a hundred dollars or so to buy a device designed for more casual gameplay.

The biggest problem, though, is that the Playstation Move does not fit in Sony's overall business model. One established fact about the Wii, corroborated by market research, is that most Wii customers don't buy many games beyond the basic titles that come bundled with the console. That is why game developers are not so enthusiastic about developing games for the Wii anymore, despite its large install base. On the other hand, Nintendo makes a great deal of its profits from hardware sales alone. Sony, instead, sells its hardware at a loss, and almost all of its profits come from royalties that game publishers pay for the games they make. Now, while the Move can help Sony sell more hardware, Nintendo's precedent suggests that it may not give the Playstation maker the bump in software sales that it needs to generate a return on investment. As I said, the Move is doomed from the start!
Visual Studio is able to generate different types of executables based on the developer's needs. The two main types are //Console// applications and //Windows// applications.
The only real difference is the entry point of the executable. The entry point of a console application is the standard {{{main}}} function, and whenever you run this type of application Windows opens a command prompt to use as output for {{{stdout}}}. A Windows application, instead, should create and manage its own window, and its entry point is {{{WinMain}}} or {{{wWinMain}}}.

To change the application type generated by Visual Studio, go to the project's property pages under {{{Linker->System}}} and set the {{{SubSystem}}} entry to one of the available options. In fact, Visual Studio can create more project types than the ones mentioned here, but they are used more rarely.

* http://msdn.microsoft.com/en-us/library/fcc1zstk(VS.80).aspx
Files in Windows have three different timestamps:
* Date Created
* Date Modified
* Date Accessed

The first two are common to all operating systems, while the last one is not frequently used by applications. But there are programs that use it! For instance, Visual Studio uses the Date Accessed timestamp to decide whether or not a file was modified and needs to be compiled in an incremental build. Problems with the Date Accessed timestamp show up when using Windows software on file systems that are not NTFS, for example through virtualization software or applications like [[MacDrive|http://www.mediafour.com/products/macdrive/]].
In some rare cases --likely a bug on Microsoft's part-- some of the updates listed in //Windows Updates// will refuse to install in Windows Vista. The error message asks you to log in as an administrator even if you are a user with administrative privileges. It turns out, in fact, that in Windows Vista there is a hidden user called //Administrator// that has more privileges than any other user, even users with administrative privileges and UAC turned off. This is how to resolve the problem and install the problematic updates:

# open the command prompt with "run as administrator"
# type {{{net user administrator /active:yes}}} to enable the //Administrator// user
# log out
# now you should see a new user account called //Administrator//. Log in with this account
# install your updates
# restart
# log in to your regular account
# open the command prompt again with "run as administrator"
# type {{{net user administrator /active:no}}} to disable the //Administrator// user
[[Wolfram Alpha|http://www.wolframalpha.com/]] is a very interesting technology launched onto the web just over a week ago by the guys behind //Mathematica//. It looks a lot like a search engine, but in fact it is nothing like it. Stephen Wolfram prefers to call it a //computational engine// instead, but """--honestly--""" there isn't any good term yet to label what Wolfram Alpha brings to the table. When you fire up the web page, you are presented with an all-too-common search bar, but unlike traditional search engines here you won't get a static list of web hits for your queries. Just try one of the suggested search examples and you will get an elaborate //computation// of your request along with neat visualizations such as graphs, pie charts, and all sorts of other more unusual representations. Likewise, Wolfram Alpha won't accept the kind of generic queries that you would put into Google. Rather, you should ask questions about "anything that is computable". So, instead of "organizing the world's knowledge", this new technology tries to "compute all that is computable". This very proposition is nothing short of gargantuan. Perhaps only a great mind like Dr. Wolfram himself would dare to attempt a project this ambitious. By the way, why did they call it Wolfram //Alpha//? Is it because a project like this will be a perpetual alpha release?

I tried a good number of queries and I am quite impressed with the results that I got. Frankly, I do not expect many casual users to be as impressed as I am, but I am sure that anyone who understands a bit about computers and computation can definitely appreciate the magnitude of this accomplishment.

Perhaps the greatest limitation of Wolfram Alpha is that it can only provide answers to questions that can be computed quickly. That is, computations that take more than a couple of seconds are terminated right away and no result is displayed for them. Sure, there is a compute cloud behind all this number crunching, but still the most interesting computational queries are always the ones that take a long time to finish. Indeed, this is not really a limitation of the technology, but more of a practical limitation at this point. In a sense, Wolfram Alpha is a shallow computational engine for this reason, but what it lacks in depth it makes up for in unparalleled breadth. After all, the strength of web technologies is this very ability to aggregate large quantities of knowledge, and not necessarily the task of processing this information -- which is what traditional computers have been built for since their inception.

Is Wolfram Research trying to sell computation? Not quite. For many years there has been talk of building web technologies that would provide remote compute time to customers, but this model has not emerged yet and this is arguably not what Wolfram Alpha was designed for.

The real question is what you can do with Wolfram Alpha. 
These days security is becoming one of the greatest frustrations in software. Applications ranging all the way from  Mozilla Firefox to Microsoft Word are being crippled by security measures that effectively hinder the user from performing even the most basic operations. Some companies even use //security// as an excuse for [[removing features|http://blog.us.playstation.com/2010/03/28/ps3-firmware-v3-21-update/]] now! Is it really to the benefit of consumers? Well, I leave this topic for another rant.

Here I consider how users are denied one of the most basic editing features in Word 2007 by security: copy and paste. How can you edit a document in the 21st century without copy and paste? Do people really need thousands of advanced programming capabilities, which are used by less than one percent of Word users, when even the most essential feature does not work?

Well, if you start writing a document in Word 2007 and save it in the //docx// format, chances are that copy and paste will suddenly stop working. You select your text, copy it, and when you try to paste it...BAM! A message box comes up saying that macros are disabled in the document and the operation cannot be performed. In previous versions of Word you would at least get a convenient way to enable macros and continue your work, but here even that is gone.

Enabling macros is now a real pain and a completely irrational process. So here we go. To enable macros go to 

''Office Button->Word Options->Trust Center->Trust Center Settings''

and enable all that can be enabled. At this point you would expect copy and paste to work, but yet again you have to endure more misery. So let's go on:

Save your file in the //docm// format for macro-enabled Word files (do we need this nonsense?)

Close and re-open Word (what the #&^$ ?)

And now finally your travail is over and you can enjoy an editing feature that was introduced to the world [[over forty years ago|http://en.wikipedia.org/wiki/Cut,_copy,_and_paste]]!

The default layout style in XCode is called "Condensed" and keeps all editor and tool windows separate. If you use this layout in your project, you'll soon have a desktop flooded with a large number of independent windows. To most people this kind of layout, which is unique to XCode, feels cluttered and cumbersome. I should also note that Apple's user interface guidelines suggest that Mac applications should be designed this way, so that users can rely on the system-wide Exposé functionality to quickly find the window they are looking for. For instance, Photoshop CS4 on the Mac can be used this way. In Photoshop, however, the appearance of an image as seen in an Exposé thumbnail is enough for the user to quickly find what he is looking for; for code it is not the same thing. Still, perhaps one benefit of the default layout style is that it makes it easy to distribute windows across multiple displays, if you have such a setup.

Luckily, if you don't particularly like this layout style, you can change it in XCode's preferences under the General tab. The layout style that is consistent with almost all other IDEs is called "All-in-One".
When you develop a large application, you typically want to organize your framework into a number of distinct projects with explicit dependencies between them. 

''Visual Studio''
In Visual Studio it is easy to organize even very large frameworks. You first create a //solution// for your entire software framework and then add individual projects to it. Each project deals with its own implementation files, resources, and settings. In addition, Visual Studio solutions and projects are stored as distinct files in the file system. In the solution properties, you can set dependencies between individual projects.

Achieving the same level of organization is not as straightforward in XCode. At least, the process to do so is not well documented or advertised. In XCode, instead of //solutions// and //projects//, there are //projects// and //targets//. A project collects all the implementation files, resources, and binary dependencies of an application, while a target is simply a set of rules to build a binary out of the items stored in the project.

Here are the basic steps to organize multiple projects in XCode the way you would do in Visual Studio:

* Create individual XCode projects for the various parts of your overall framework. For the sake of example, say we create projects //A// and //B//.
* Create a new empty XCode project and give it a comprehensive name for your framework, say, //AllMyProjects//.
* Right click on the project icon and add a group, say, //MyProjects//.
* Select the group that you just created and from the //Project// menu select //Add to Project//.
* Add projects A and B. At this point you should see two XCode project icons for A and B in your //MyProjects// group.
* Add a new //Aggregate// target to your project and call it something like //BuildAandB//.
* Open up the properties of the new target (option+command+E) and add the targets of A and B to //direct dependencies// (press the plus sign to do so).
* Now, you can build projects A and B from //AllMyProjects// by building the corresponding target.

''Why Targets ?''
XCode's notion of targets is actually meant to simplify the organization of a project and reduce the proliferation of distinct individual chunks of software and related files on the file system. In principle, this is a good thing. Having a single project with multiple targets allows you to bundle all the code and data related to a large-scale project in a single entity. For instance, XCode's approach makes it much easier to move a project from one system to another without breaking all dependencies. This approach, however, has a few drawbacks when you also consider some of XCode's limitations:

* Setting up a target correctly for a complex application can be tedious, but XCode does not allow you to copy targets directly from one project to the other. As a result, if you want to reuse a target from another project, you have to copy all its settings manually.

* There are several development tools, such as CMake or Qt's QMake, that can emit XCode projects directly. However, the only way to reuse these automatically configured targets in other projects is to establish a direct link to them as outlined above.
When you restart an Apache server with the command:
{{{sudo /etc/init.d/apache2 restart}}}
you may get a warning message (typically a complaint that Apache could not reliably determine the server's fully qualified domain name) if you haven't configured your web server properly. To fix this error, open your {{{httpd.conf}}}, for instance with:
{{{sudo gedit /etc/apache2/httpd.conf}}}
and add a {{{ServerName}}} directive, such as {{{ServerName localhost}}}. Now, restart the server again and you will not get that pesky message anymore.
In most UNIX flavors, typing {{{ifconfig}}} in the console is enough to give you a lot of information about your network interfaces, but not all flavors of UNIX are the same. Here are a few special cases:

{{{ifconfig -a}}}