‘buildNumber: unbound variable’ in Ambari Setup

I’m currently experimenting with Apache Ambari, because setting up a Hadoop cluster manually looked like no fun whatsoever. However, the Ambari project does not distribute binaries, and the freely available Hortonworks binaries required a support identifier in order to deploy a cluster. That’s a hard no on that one there super chief.

So, I built Ambari from source. That was a trial that probably deserves a blog post in its own right.

But I got my RPMs, installed them on my target server, and tried to run the setup process;

Well, fuck.

At first look, the internet proved little help – I found a couple of people with the same issue, but no helpful information. So, in my desperation, I went to the third page of a Google search. I know, shocking.

And what I found was a Chinese blog post. I don’t read a word of any Asian languages, but a quick text search of the post led me to this;

Aha! So, I tried it (replacing the value of the variable with the correct version number, in my case), and lo;

And we are in business. So, it seems that if you build ambari-server yourself, it is reliant on a buildNumber environment variable. (I do not recall having this issue with the Hortonworks binaries.)
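For reference, the workaround looks something like this (a sketch – the version number here is a placeholder, substitute whatever version you actually built):

```shell
# ambari-server's setup scripts expect this variable when the build
# didn't bake a build number in. Version shown is a placeholder.
export buildNumber=2.6.0.0
ambari-server setup
```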

Would have been nice to have that in the documentation, huh?

Citrix VDA Re-Registers After Every Application Launch

When implementing our XenApp 7.8 farm, I ran into a little problem. Every time we launched an app hosted on the new 7.8 farm, the Terminal Server/Virtual Delivery Agent that hosted the connection would lose its registration with the Delivery Controller, dropping to an ‘Initializing’ state for 20 seconds or so before successfully returning to a state of ‘Registered’. The same symptom would also occur when an app was closed.

The following message appeared in the Event Log on the VDA when the issue occurred – but only very rarely; I’m not sure why. After turning on VDA logging, the same error could be observed in that log every time.

Our environment had three farms; one Presentation Server 4.0 farm, a pair of XenApp 6.5 farms, and now a pair of XenApp 7.8 farms. These were accessed by various Citrix Web Interface 5.4 servers (for older PNAgent clients) and a Storefront 3.0 cluster (for modern Receiver versions & thin clients via a Desktop Appliance site). Both access methods are configured to communicate with all farms. You might notice that this is quite a raft of technologies, from different eras of Citrix products. This was ultimately the source of the issue.

This issue only occurred when using Storefront to launch apps, and only on the XenApp 7.8 farm. All other combinations were fine.

After a long investigation with Citrix Support, the source of the issue was discovered. Our Storefront cluster had been implemented more than a year before the project to implement XenApp 7.8, and at the time the Store had been configured to disable Launch References, to enable Storefront to launch apps from the Presentation Server 4.0 farm. This was done by editing the web.config for the store, setting RequireLaunchReference="off" and OverrideIcaClientname="on". See this blog post for more details.

Unfortunately, this configuration causes issues with XenApp 7.8. Unlike PS4 or XenApp 6.5, launch references are required for XenApp 7.5+ farms. As soon as we changed the web.config for the store to default settings (RequireLaunchReference="on" and OverrideIcaClientname="off"), the re-registration issue disappeared. However, the removal of this setting does stop apps in the PS4 farm from launching.
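For reference, these are the two settings as they appear in the store’s web.config – shown out of context here, since the enclosing element and its location are covered in the linked blog post. These are the defaults that XenApp 7.5+ requires:

```xml
<!-- Store web.config fragment (enclosing element omitted – see the
     linked post). Defaults shown; the PS4 workaround flips both. -->
RequireLaunchReference="on"
OverrideIcaClientname="off"
```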

Presentation Server 4.0 is not supported by Storefront, so I do not believe there is a way to get apps from PS4 and XenApp 7.5+ coexisting happily on the same store. My solution was to disconnect the PS4 farm from our existing store, reset that store’s settings to default, and create a new store dedicated to PS4 apps. By disabling Launch References on the dedicated PS4 store and configuring my clients to have access to both stores, I can still present all of these apps to my users with only minor changes to end user behavior. It isn’t the best solution (the best solution would be to get rid of the PS4 farm entirely, but business realities prevent that), but it suffices to solve the immediate issues and has removed this roadblock from the project.


The Citrix XML Service at address has failed the background health check

When implementing a new Storefront 3.7 server, I encountered an issue with communication failures between the Storefront and our XenApp 6.5 farms. Intermittently, applications from these farms would not enumerate. The following events were logged in the event log repeatedly, indicating transient connectivity issues;

The Storefront server had been built following Carl Stalhood’s excellent Storefront build guide, which includes several non-default configuration recommendations. As I was not experiencing the issue on our existing Storefront 3.0 servers, I suspected that one of these changes was the source. While researching the issue, I found a Citrix support forum thread in which a user recommended turning off socket pooling to aid in troubleshooting connectivity issues, which set me thinking.

A quick look at my config confirmed that, yes, socket pooling was enabled as per the build guide recommendation. In addition, in the Event Log there were messages that I had overlooked before;

Aha. Something odd is going on there.

So, I disabled socket pooling in the settings of all the stores configured on the Storefront server. This has caused the messages above to stop being logged, and I have not had the symptoms recur since, so I believe the issue is solved.
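If you prefer to make the change directly rather than through the console, socket pooling lives in each store’s web.config. A sketch from my environment – the path and farmset name may differ in yours, and other attributes on the element are omitted:

```xml
<!-- C:\inetpub\wwwroot\Citrix\<Store>\web.config -->
<farmset name="Default" pooledSockets="off">
```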

Obviously there is something not quite right happening with socket pooling and communication with XenApp 6.5 farms – but our environment is not large enough for socket pooling to be required, and thus this is a good enough solution in my case. If you have this problem and require socket pooling to be enabled, I suggest opening a case with Citrix Support for investigation and proper resolution.

Weblogic SSL and Google Chrome

During our implementation of a new JDE Oneworld (Enterprise One) environment, we encountered an issue after enabling SSL on our web instances. Internet Explorer was quite happy with the configuration, but attempting to load the page in Chrome resulted in an error:

‘SSL Server probably obsolete.’

A quick search revealed that this meant that the server was willing to communicate on SSLv3 (which is a huge issue due to the POODLE vulnerability). So we needed to limit what SSL/TLS versions the server was using – more specifically, we want it to only use TLS 1.2, as both SSLv2 and SSLv3 have major vulnerabilities and all our clients are modern enough to support TLS 1.2 (so why use anything older?).

More canny googling also revealed the solution. We needed to add a new startup argument to the web instance;


This would force the webserver to use TLS 1.2, and not allow the older SSL or TLS protocol versions.
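For reference, the arguments in question – a sketch, as the exact values depend on your WebLogic version. protocolVersion=TLS1 rules out SSLv2/SSLv3, and (once JSSE is enabled) minimumProtocolVersion can raise the floor to TLS 1.2:

```
-Dweblogic.security.SSL.protocolVersion=TLS1
-Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2
```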


Unfortunately, after applying this configuration and restarting the web instance, the error remained. It took quite a lot of frustration and more than a little Oracle Knowledgebase diving before we stumbled on what we had missed.

In order to use the -Dweblogic.security.SSL.protocolVersion argument, you must be using JSSE SSL. This was not enabled by default on our web instances (which had been created automatically during the JDE install process). This setting lives under General -> SSL -> Advanced.
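The same setting can also be scripted via WLST rather than clicked through the console – a hedged sketch, where the instance name WEB_INSTANCE, host, and credentials are placeholders for your environment:

```python
# WLST sketch: enable JSSE on a server's SSL configuration,
# equivalent to ticking 'Use JSSE SSL' under General -> SSL -> Advanced.
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/Servers/WEB_INSTANCE/SSL/WEB_INSTANCE')
cmo.setJSSEEnabled(true)
save()
activate()
```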

jsse ssl

After enabling ‘Use JSSE SSL’, saving and activating the configuration, and restarting the web instance, the error disappeared.

Adding a new ODBC Linked Table TableDef in MS Access

Yesterday I was given a brief to create a small tool that would update the ODBC Linked Tables in various Access databases. The catch: the database to which the linked tables refer is moving platform (DB2 to Oracle) and has a new schema name. The former is easily manageable using the built-in Linked Table Manager in Access, but the latter is more difficult – the SourceTableName (called the ForeignName in Access’ internal schema) cannot be changed on an existing Linked Table, even programmatically. Thus, my only option was to delete and recreate each Linked Table definition with the proper values.

Easier said than done. While attempting to append a new TableDef to an Access database, I encountered a rather vexingly obtuse error;

This error occurred while attempting to Append() my new table definition to the database. Google indicated that either my ISAM drivers were corrupted, or my connection string was wrong. Well, I’m trying to use ODBC, not ISAM, so I assumed that the reference to ISAM and any related driver issues were a red herring.

Below is the C# code snippet that generated the error. (It has been simplified, removing many of the form-specific elements.) It is in essence reading a list of TableDefs from a list box (where their names had been previously populated, allowing the user to choose which ones to update) and then attempting to create a new TableDef with the same name but a different connection string and SourceTableName.
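In essence, the approach looks something like the following minimal sketch – not the original code, and assuming the DAO primary interop assembly (Microsoft.Office.Interop.Access.Dao); the path, connect string, and schema name are placeholders:

```csharp
// Minimal sketch using the DAO interop assembly; names are placeholders.
using Dao = Microsoft.Office.Interop.Access.Dao;

void RelinkTables(string dbPath, string[] tableNames,
                  string newConnect, string newSchema)
{
    var engine = new Dao.DBEngine();
    Dao.Database db = engine.OpenDatabase(dbPath);
    try
    {
        foreach (string name in tableNames)
        {
            // SourceTableName (ForeignName) cannot be changed in place,
            // so drop the old definition and recreate it.
            db.TableDefs.Delete(name);
            Dao.TableDef tbdNew = db.CreateTableDef(name);
            tbdNew.Connect = newConnect;   // must start with "ODBC;"
            tbdNew.SourceTableName = newSchema + "." + name;
            db.TableDefs.Append(tbdNew);
        }
    }
    finally
    {
        db.Close();
    }
}
```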

The connection string for tbdNew I drew from Access’ internal MSysObjects table – I created a new linked table to the new location, and took a look at the resulting object record;


This, of course, is what led me astray. The Connect value in the table above and the TableDef.Connect property of a TableDef object are related, but not identical. The connection string in the code snippet above isn’t complete – I needed to add ODBC; to the start (as below). I discovered this by examining the TableDef of the new Linked Table and noticing that its Connect property did not match the Connect table value above. Once I made this addition, everything started working fine;
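For illustration – the DSN name and parameters here are placeholders, not our real values:

```
MSysObjects Connect value (what I copied first – does NOT work in code):
    DSN=MyOracleDSN;UID=appuser

TableDef.Connect value (note the leading tag – this works):
    ODBC;DSN=MyOracleDSN;UID=appuser
```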

After running the code and examining the resulting Linked Table objects in MSysObjects, I could see that ODBC; had been trimmed from the front, bearing out my theory. I suspect that without the ODBC; tag telling Access that it is meant to be an ODBC linked table, it assumes by default that you are connecting to an ISAM data source and starts looking for a driver – which it of course cannot find, leading to the ISAM error above.