Thursday, December 29, 2011

Last Modified Date Rounding to Whole Seconds

The DateTime class in Salesforce has millisecond precision, as implied by the getTime method, which "returns the number of milliseconds since January 1, 1970, 00:00:00 GMT represented by this DateTime object." Since Last Modified Date is a Date/Time field, one would assume that it includes the millisecond value when Salesforce stamps a record, right? Alas, this is not the case.

A video demonstrating the behavior: http://www.youtube.com/watch?v=IyRd1woiS24

It seems that Salesforce only includes the year, month, day, hour, minute and second when stamping the Last Modified Date. The millisecond value is simply zeroed out without any rounding. So, if a record was actually modified at 12/29/2011 12:31:20.578, Salesforce will stamp the record as being modified at 12/29/2011 12:31:20.000.
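
A minimal way to see this for yourself in anonymous Apex (I'm using a Task here purely as an example; any object you can insert will do):

// Insert a record, then re-query it, since LastModifiedDate
// is not populated on the in-memory sObject after DML.
Task t = new Task(Subject = 'LastModifiedDate precision check');
insert t;
t = [SELECT LastModifiedDate FROM Task WHERE Id = :t.Id];

// DateTime.now() usually has a non-zero millisecond component;
// LastModifiedDate always comes back with zero milliseconds.
System.debug('now():            ' + DateTime.now().millisecond());
System.debug('LastModifiedDate: ' + t.LastModifiedDate.millisecond());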

Okay, so there's a tiny discrepancy. I mean, we're talking tiny differences here. The million-dollar question: Why do I care?

I care because this little quirk caused my unit test to fail over 80% of the time: comparing the Last Modified Date to DateTime.now() was producing unexpected results. For example:
  • At the beginning of the test method, DateTime.now() returned 12/29/2011 4:00:01.500.
  • At the end of the test method, the updated record's Last Modified Date shows 12/29/2011 4:00:01.000.
  • System.assert(record.LastModifiedDate >= testStartDateTime) fails unexpectedly and inconsistently.

The oddity with Last Modified Date is not impossible to work around (one approach is sketched below), but should I really have to work around it? What does this mean for the test-driven development that Apex is supposed to enable?
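
For what it's worth, the workaround I've settled on is to truncate the captured start time to whole seconds before comparing, so the assertion compares like with like. A rough sketch (the Account and its DML are just placeholders for whatever the real test does):

static testMethod void lastModifiedDateIsAfterTestStart() {
    // Capture the start time, then drop the milliseconds, since
    // LastModifiedDate will be stamped without them.
    DateTime start = DateTime.now();
    DateTime testStartDateTime = DateTime.newInstance(
            start.year(), start.month(), start.day(),
            start.hour(), start.minute(), start.second());
    
    Account record = new Account(Name = 'Test Account');
    insert record;
    
    record = [SELECT LastModifiedDate FROM Account
              WHERE Id = :record.Id];
    
    // Passes even when the whole test runs within one second.
    System.assert(record.LastModifiedDate >= testStartDateTime);
}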

Thursday, December 22, 2011

Org ID Automatically Replaced in Sandboxes

I discovered something that made my blood run cold today: The OrganizationInfo.isProduction method I was relying on in Apex to pick the correct web service endpoint was returning true in my sandbox orgs.

My OrganizationInfo class is super simple, created as suggested in a comment on the IdeaExchange (Determine sandbox name vs production from apex). Shown below for reference:

public class OrganizationInfo {
    
    /**
     * The Organization ID of the production org.
     */    
    private static final Id PRODUCTION_ID =
            '00DA0000000Kb9R';
    
    /**
     * Determine whether the current org is a
     * production org or a sandbox org,
     * based on whether the Org ID matches the
     * production App org's Org ID.
     *
     * @return Whether the org is Production
     */
    public static Boolean isProduction() {
        return UserInfo.getOrganizationId()
                == PRODUCTION_ID;
    }   // public static Boolean isProduction()
    
}   // public class OrganizationInfo
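
For context, here's roughly how the class gets used; the class name and endpoint URLs below are placeholders, not the real ones:

public class PartnerServiceClient {
    
    /**
     * Pick the web service endpoint based on whether
     * the code is running in production or a sandbox.
     */
    private static String getEndpoint() {
        return OrganizationInfo.isProduction()
                ? 'https://api.example.com/service'
                : 'https://test.api.example.com/service';
    }
    
}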

The obvious question: How does something so simple fail in a sandbox org?

The surprising answer: When a sandbox is created or refreshed, Salesforce automatically does a search and replace, swapping the production org ID for the sandbox org ID wherever it appears in the code. After a refresh, a single line of the above class has changed:

    private static final Id PRODUCTION_ID =
            '00DZ000000056dv';

This tiny, almost unnoticeable change has been screwing everything up for a long time, and its discovery explains why we would periodically get strange data in production and strange responses in our sandboxes.

I wish I had known about this earlier, but who would've guessed? Anyhow, the fix that I've implemented (and confirmed by creating a new sandbox) is to split the ID into 3 parts when assigning it to the constant, so the full production ID never appears as a single literal for the sandbox copy process to find and replace.

    private static final Id PRODUCTION_ID =
            '00DA0' + '00000' + '0Kb9R';
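
To confirm the fix after a refresh (or to sanity-check any org), a quick snippet to run in anonymous Apex; it should print false in a sandbox and true in production:

    // Should be false in a sandbox, true in production.
    System.debug('Org ID: ' + UserInfo.getOrganizationId());
    System.debug('isProduction: ' + OrganizationInfo.isProduction());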

Amazing... the things one discovers in the worst possible ways...

Discrepancies in Reports and Report Actions

I learned an interesting thing about reports today: What's displayed in a report can be different from what gets exported or what gets added to a campaign. Let me give an example to clarify.

Here's what we expect to happen:
  1. Run a Leads report.
  2. Review the data on the report. Let's say that the report shows 3 Leads.
  3. Click Add to Campaign. The 3 Leads we saw are added to a campaign.

Here's what could actually happen:
  1. Run a Leads report.
  2. Review the data on the report. Let's say that the report shows 3 Leads.
  3. Click Add to Campaign. Four (4) Leads, not 3 Leads as we just saw, are added to a campaign.
  4. Return to the report. 4 Leads are now shown, instead of 3.
  5. Click Export and complete the export. Five (5) Leads, not 4 Leads, are exported.

The cause of this behavior is that when an action is performed on a report, namely Add to Campaign or Export, the report is run again in the background as part of performing the action. In other words, what we see on the report before we export or add to a campaign can be thought of as a "preview" of what would happen when we perform the action.

In most cases this is probably a non-issue, but I thought the phenomenon was worth noting on the off chance that someone is perplexed by an odd discrepancy between a report and a campaign member list or an export file.

Sunday, December 18, 2011

ApplyYourself Hack to Mass Update Choice Group Values

How annoying is it that there is no easy way to mass update choice group values in ApplyYourself? All mass updates have to be sent to an account manager, who then uses some magical tool to make the changes that should've taken 1 minute for an administrator to complete.

For a change that I needed to make immediately, outside of business hours, I had to come up with an alternative: ChoiceGroupUpdateHack.js

When this script is pasted into the address bar (making sure the code is prefixed with javascript:) and executed, a small textarea element is created at the top of the page along with an Update Values button.


To use the hack UI, all one has to do is paste new values directly from Excel into the textarea element and then click the button!


The Excel spreadsheet should be formatted as it is exported from ApplyYourself. The spreadsheet should include the following columns, in order:
  • Choice Value
  • Choice Code
  • Choice Order
  • Header: "Yes" or blank
  • Related Value
  • Inactive Date: MM/DD/YYYY

Although this hack took 3 hours to develop, the ability to mass update choice groups on demand, without going through an account manager, is priceless to me.

Note: If ApplyYourself starts throwing bizarre errors that don't make sense, another hack may be necessary to clear the choice group before loading the new values: ChoiceGroupClearHack.js

This hack was validated in Safari 5.1.2 and in Google Chrome 16.0 on Mac OS X Lion.

Friday, December 16, 2011

ApplyYourself Hack to Use Free-form Text Filters for Choice Group Fields

I discovered an interesting bug in ApplyYourself that makes it possible to do something that should've been standard functionality: set up normal text filters using the Contains operator with Choice Group fields.

Imagine trying to get a list of all records that have a Program value containing the word "Bachelor" when your Program field is set up as a Choice Group with over 100 options. The standard query interface forces you to use the following filter:
  • Program In this List ... (manually selecting every single value using the tiny 3-line picklist)

Or, you may have smartly added "Bachelor" as an extra value to the associated Choice Group so that you can select that single value when using the Contains operator.

However, both of these methods are annoyances. What if I want to query on the fly for a value I didn't anticipate needing?

The hack is simpler than either alternative:
  1. Set up the filter with the desired field, the Contains operator, and any value at all.
  2. Click save and run.
  3. Click the Back button in your browser, not in ApplyYourself. The picklist will now have magically turned into a free-form text field!

Monday, December 12, 2011

Problem with Surveys Asking for "Agree" or "Disagree" Using Radio Buttons

I filled out a short survey today (which I appreciated for being short) that asked me to assess my satisfaction with an event I recently attended. What's notable is that just before clicking the Submit button, I decided to check my 3 responses one more time. I was very glad I did, because all of my responses were the exact opposite of what I intended, so I corrected each one before making my final submission.

The problem: I had selected "disagree" for every item with which I agreed and vice versa for ones with which I disagreed.

How often do people encounter surveys that list a bunch of statements and then give radio buttons for indicating one of the following (or similar) sentiments:
  • Strongly agree
  • Agree
  • Neutral
  • Disagree
  • Strongly disagree

I think that, based on past experience, a respondent may assume that "Strongly Agree" always falls on the left (or always on the right) side of the response matrix without pausing to actually read the survey. If the respondent is in a hurry to complete the survey, are the differences between the following two screenshots really all that apparent?



To reduce the chance of survey results being invalidated by responses that mean the exact opposite of what the person intended, picklists may be used in place of radio buttons. I'll outline two reasons why I think picklists are the better choice.

1. Picklists force people to read.

When a user is confronted with a picklist that starts with an option like "--Select a Response--", he or she must read all of the picklist options in order to pick the right value.

2. Keyboard shortcuts make picklists more usable.

Imagine a standardized picklist that has the following options:
  • Agree
  • Agree, Strongly
  • Disagree
  • Disagree, Strongly
  • Neutral

When a user tabs to the input and when the input has focus, the user can:
  • Press A once for "Agree";
  • Press A twice for "Agree, Strongly";
  • Press D once for "Disagree";
  • Press D twice for "Disagree, Strongly"; or
  • Press N for "Neutral".

In my mind, this makes surveys faster and easier to fill out, which should increase the response rate: you can honestly tell users that the survey will take 1 minute instead of 2.

Friday, December 2, 2011

Streamlined Login for MediaWiki

I love MediaWiki overall, but I cannot stand the way the login prompt works when an unauthenticated user tries to access a protected page.

The tedious process is basically as follows:
  1. User clicks a link to a protected page. A page with "duh" instructions is displayed, forcing the user to click a link to get to a login form.
  2. User clicks the link to get to the login form. Why doesn't the login form automatically give focus to the Username field?
  3. User clicks in the Username field just to start typing in a username.

The process really should be as follows:
  1. User clicks a link to a protected page.
  2. User immediately starts typing a username.

So, to make it easier for users to like and adopt our MediaWiki instance, I customized two files: /includes/OutputPage.php and /includes/templates/Userlogin.php.

In case anyone wants to easily copy the code, I've uploaded my notes on this hack. Now I'm finally ready to start publicizing our MediaWiki internally and getting people excited about it!

This hack was tested on the following browsers in Mac OS X Lion:
  • Safari 5.1.1
  • Firefox 8.0.1
  • Chrome 15.0

Note: This implementation causes the instructions page to fully load before JavaScript redirects the user to the actual login page. Any suggestions on how to skip the loading of the instructions page altogether will be much appreciated.

Note: grep -R alone is a significant reason why Linux- and Unix-based OS's (e.g., Mac) are so much better for developers than Windows, out of the box.