Category Archives: Tech

Gmail’s New Select All Interface

OK, so this is pretty clever. Clever enough that I’m posting about it Friday night after work when I could be out doing things in the city.

Previous E-mail Options in Firefox

As the caption reads, this is a screenshot of Gmail in Firefox under the older interface. To select all e-mails, or certain subsets of them, you click on “All” or one of the other text options.

Standard Select All Check box

This is what you normally see across applications and websites for selecting all. It’s an accepted convention, so regular users know that clicking the top check box selects every check box in the list.

New E-Mail Options in Chrome

This is what the current e-mail options area looks like. Sorry for using a Firefox screenshot above and a Chrome screenshot for the new UI. The select all box looks familiar, right? Wait, what’s that? There’s a drop-down option next to the check box?!

New E-Mail Options in Chrome - After Drop Down is Clicked

This screenshot shows what happens when you click it. As I said, this is some clever stuff. It replaces the clunky, non-standard convention of clicking “All” with simply clicking the check box when you want every item in the list. This is a good thing, as general users are already accustomed to the check box convention for selecting everything. That in itself is good enough. If power users want a specific subset (e.g. Unread, Starred, etc.), they can easily get it with the drop-down option.

What are the pros and cons of this change?

Off the top of my head:

Pros

  • Adopts common Select All check box convention
  • Reduces UI footprint (pixel real estate)
  • Simplifies the UI by presenting fewer options up front
  • The ‘None’ option does not break convention since you can deselect all by clicking the check box a second time

Cons

  • Power users now have to click twice when they only had to click once before (previously they could simply click ‘Unread’)

Google Reader Stats

Via TC, I found out about a minor Google Reader update.

If I recall correctly (from when I last looked at the Trends section ages ago), Google Reader only showed your last 30 days of activity as of the date you checked it. Now it shows your lifetime activity, but caps read items at 300K. That’s a shame, as it would be nice to see what stats their DB actually has.

I read RSS feeds as a good way to stay on top of news coverage from many diverse websites. RSS has improved the way I get news since I started using it in 2008. Yes, I am late to the party, but RSS is a good tool. I feel like I’m playing a game of Inbox Zero with Google Reader.

Totems and Document Authenticity

*minor spoilers about the film Inception*

In the movie Inception, I was introduced to the concept of a totem. Your totem is a device that only you can verify as authentic. Another person who has knowledge of your totem would not be able to fully reproduce it. For example, a loaded die will always land on a certain side due to its unbalanced weight. Even if another person knew your totem was a die, they would not know it was loaded unless 1.) you told them the secret attribute or 2.) they got hold of it and reverse engineered it. When a totem is kept secret, it is useful because another individual cannot properly reproduce it.

What occurred to me randomly was that you could use a totem-like system amongst individuals to verify document authenticity. This may or may not be used in the real world; by definition, I wouldn’t know about another individual’s totem in the real world. For instance, a spy agency could use a totem on classified documents, the kind you see in movies stamped ‘Confidential’ and tucked away in a dossier. Instead of having a generic template with the agency letterhead, a totem such as an image or uncommon pattern could be incorporated into the template. This way, people who are “in the know” could verify the authenticity of high-level, confidential documents by checking whether the unique image serving as the totem was on the document. If the document was forged and put forward as real, the missing totem would help disprove its authenticity.

To an extent, every document already has a distinct profile, but not necessarily an explicitly defined totem. With a hard copy printout, the type of paper, the ink used, etc. would help narrow down the document’s authenticity. Company-specific letterhead helps to a degree. With soft copy documents, file names, sizes, types, etc. could help narrow down the electronic origin. The distinction is that totems would not be used for *all* documents within an organization. A totem would be reserved for high-level, sensitive board minutes or signed agreements in order to maintain a level of security by obscurity (and thus the effectiveness of the totem).

Another use of totems, besides preventing forgery, would be to incorporate version control. A totem could be dynamically generated as part of an electronic document system to provide version control. Instead of an image, the totem could be text that seems out of place, or text that doesn’t seem out of place to the untrained eye. To provide an example, let’s say you have 3 members of management working on a document that is constantly revised. By placing the text “Tigers500,” a completely arbitrary selection of text, on the document, only the 3 who worked on it would be able to verify the document as authentic. An individual outside the original 3 would have no way to include the text “Tigers500” as part of a forged document. The next version of the document could include “Tigers501,” and so forth.
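
To make the version-control idea concrete, here is a minimal sketch in Python. The base word “Tigers” and the helper names are purely illustrative assumptions; the only point is that an insider who knows the shared secret can stamp each revision and later check for the expected totem text.

```python
# Minimal sketch of a text totem used for version control.
# ASSUMPTION: "Tigers" is a secret base word known only to the document's authors.

AUTHORS_SECRET_BASE = "Tigers"   # shared only among the 3 collaborators
STARTING_VERSION = 500

def totem_for_version(version):
    """Build the totem string stamped onto a given revision, e.g. 'Tigers500'."""
    return f"{AUTHORS_SECRET_BASE}{version}"

def looks_authentic(document_text, expected_version):
    """An insider verifies a revision by checking for the expected totem text."""
    return totem_for_version(expected_version) in document_text

# Stamp revision 500, then verify it later.
draft = "Quarterly board minutes...\n" + totem_for_version(STARTING_VERSION)
print(looks_authentic(draft, 500))  # True, for anyone who knows the base word
print(looks_authentic(draft, 501))  # False: the next revision would read 'Tigers501'
```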

Totems bring up many interesting use cases. I’ve talked about totems in a social setting, whereas Inception used one as a personal effect. One potential pitfall is that if your totem is compromised and “found out,” you may end up relying on blind faith when presented with a false totem.

URL Duplication

Let me start off by saying there may be “the right way” to do this, but I have not found it yet. A site such as Digg would have advanced techniques to deal with this.

In my given scenario, a user enters a URL and results are presented. Ideally, each unique URL only shows up once in the database. That way, no matter how the URL is entered, the user ends up at the correct landing page for that URL, without duplicated entries in the database. Duplicate entries make the system less efficient, as time is spent on multiple instances of the URL instead of just one location.

For example, I may enter google.com while another person enters http://www.google.com/. These two inputs have the same intention, but they are not the same string. With many websites opting to remove the www. prefix through server-side scripting, this can start to get tricky. The variable front elements typically include http:// and www.

Some websites use a subdomain and do not accept a www. in front; http://ps3.ign.com is one such example.

Another source of duplication is appended modifiers. With a URL such as http://www.nytimes.com/2010/01/09/nyregion/09gis.html?partner=rss&emc=rss, the ?partner=rss&emc=rss is not necessary for a user to view the page, yet it can cause duplication in the database.
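
One generic way to collapse these variants (distinct from the variant lookup I describe later in this post) is to canonicalize each URL before storing it. Here is a rough Python sketch; the policy of forcing http, stripping a leading www., and dropping the query string is an assumption a real site would need to tune, since some query parameters are meaningful.

```python
# Rough sketch: reduce user-entered URLs to one canonical form before storage.
from urllib.parse import urlparse

def canonicalize(raw):
    if "://" not in raw:            # allow bare inputs like "google.com"
        raw = "http://" + raw
    parts = urlparse(raw)
    host = parts.netloc.lower()
    if host.startswith("www."):     # treat www.example.com and example.com alike
        host = host[4:]
    path = parts.path or "/"        # query string and fragment are dropped
    return "http://" + host + path

print(canonicalize("google.com"))              # http://google.com/
print(canonicalize("http://www.google.com/"))  # http://google.com/
print(canonicalize("http://www.nytimes.com/2010/01/09/nyregion/09gis.html?partner=rss&emc=rss"))
# http://nytimes.com/2010/01/09/nyregion/09gis.html
```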

Unfortunately, I assume that duplicate entries are inevitable and a fact of life. As such, my goal is to prevent and fix duplicates where I can, not to eliminate them entirely.

The way that I addressed this was to do a lookup of variants of user input. Extra energy spent? Yes. Duplicates reduced? Hopefully.

The preventative measure:
So for a given input, I concatenate several variant strings and check for matches. The script tries a mix of http://www. in front, http:// in front, and / in back. These are all run through matches against the relevant table column. If any of them returns a positive, exact match, I route the request to that entry accordingly.
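
A minimal sketch of that preventative lookup, assuming the stored URLs are available as a set (in practice this would be a match against the relevant table column); the variant list mirrors the mix described above.

```python
# Rough sketch of the preventative variant lookup.
def build_variants(user_input):
    bare = user_input.strip()
    for prefix in ("http://www.", "http://"):   # reduce the input to a bare form first
        if bare.startswith(prefix):
            bare = bare[len(prefix):]
            break
    bare = bare.rstrip("/")
    variants = []
    for prefix in ("", "http://", "http://www."):
        for suffix in ("", "/"):
            variants.append(prefix + bare + suffix)
    return variants

def find_existing(user_input, existing_urls):
    """Return the stored URL the input should route to, if any variant matches exactly."""
    for candidate in build_variants(user_input):
        if candidate in existing_urls:
            return candidate
    return None

stored = {"http://www.google.com/"}
print(find_existing("google.com", stored))   # http://www.google.com/ -> reuse that row
print(find_existing("example.com", stored))  # None -> safe to insert a new row
```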

The remedial measure:
To deal with a duplicate entry in the table, I created an additional column in the database. By default, the redirect value is null. If the value is set, the routing script redirects there upon any request for the duplicated page.
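
Here is a rough sketch of that redirect column, using an in-memory SQLite table for illustration; the table and column names are hypothetical stand-ins for whatever the real schema uses.

```python
# Rough sketch of the remedial redirect column.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT, redirect_id INTEGER)")
db.execute("INSERT INTO urls (id, url) VALUES (1, 'http://www.google.com/')")
db.execute("INSERT INTO urls (id, url) VALUES (2, 'http://google.com')")  # duplicate row
db.execute("UPDATE urls SET redirect_id = 1 WHERE id = 2")                # mark the duplicate

def resolve(url_id):
    """Follow the redirect pointer (if set) to the canonical row."""
    row = db.execute("SELECT id, url, redirect_id FROM urls WHERE id = ?", (url_id,)).fetchone()
    if row and row[2] is not None:
        return resolve(row[2])
    return row

print(resolve(2))  # (1, 'http://www.google.com/', None) -> routed to the canonical page
```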

With any given URL, duplicates are highly likely. Any URL may have several variants that are valid for use (google.com vs. http://www.google.com). Also, many pages have appended $_GET values (such as ?partner=rss&emc=rss). Then, the recent mass resurgence of URL shortening services (bit.ly) adds another layer of URLs that all redirect to the same page.

It seems to me that duplicated URLs in a data set where each URL is intended to represent a unique page is inevitable given a large enough collection.

I Like This

Heart written in sand

On the internet, there is a commonly implemented feature: the LIKE link (or button). For example, on Facebook, one can click LIKE on a news event, and in Google Reader, one can click LIKE on an RSS item. The LIKE feature is widespread on the internet and is a good way to quickly gauge audience reception. Note that there is usually no DISLIKE button, most likely because the internet can easily turn into a hate-filled place and a DISLIKE button would encourage negativity.

However, what exactly is the user indicating that they like?

Does the user like the news event? Does the user like the author’s writing style? Does the user like the image used? There are endless scenarios that could lead a person to click on LIKE. Websites, in the interest of fostering community and continued page views, do not care enough to distinguish this. Maybe it is enough for the website that the user has seen the content, reacted positively, and clicked LIKE.

It’s possible to increase the clarity and usefulness of the LIKE feature. For example:

  • Why – This is the reason the user likes the item. It could be a category drop-down box, a comment box for elaboration, etc. This would make the LIKE feature more complicated to use and present, but it would make it clear what the user actually likes (see the sketch after this list).
  • Dislike – As long as users can LIKE things, it’s always possible that they may NOT LIKE things and thus DISLIKE them. By being honest with users, a website can let them indicate what items they LIKE and DISLIKE. If DISLIKE were implemented, it would be important to keep it constructive, if such a thing is possible on the internet. It would be preferable to keep things civil by letting the user express why they dislike something, and to try to prevent this from turning into personal, anti-user sentiment.
  • Tracking – On Twitter, a person can view Favorites, which are akin to LIKED items. On websites such as Google Reader and Facebook, to my knowledge, a user can’t retrieve a history or listing of what they have LIKED.
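
To show how these additions might hang together, here is a rough data-model sketch in Python; the field names and the Reaction choices are hypothetical illustrations, not any site’s actual schema.

```python
# Rough sketch of a richer LIKE record: the reaction, an optional reason, and a timestamp.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class Reaction(Enum):
    LIKE = "like"
    DISLIKE = "dislike"      # only if a site decides to allow it

@dataclass
class Vote:
    user_id: int
    item_id: int
    reaction: Reaction
    reason: Optional[str] = None                          # the optional "why"
    created_at: datetime = field(default_factory=datetime.now)

history = [
    Vote(user_id=7, item_id=101, reaction=Reaction.LIKE, reason="great photo"),
    Vote(user_id=7, item_id=202, reaction=Reaction.DISLIKE, reason="misleading headline"),
]

# Tracking: a user can retrieve everything they have LIKED.
liked = [v.item_id for v in history if v.user_id == 7 and v.reaction is Reaction.LIKE]
print(liked)  # [101]
```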

The LIKE feature is a welcome addition to the social web. In addition to sharing or commenting, LIKE is another way to indicate relationships between individuals and data. There is a lot of room to improve the LIKE feature into something more robust and well-defined. Right now, many sites use the LIKE feature as a way to maintain feature parity and increase user engagement.

Adobe Burnsauce

This comment on a Palm developer’s recent post is particularly vicious and hilarious:

Whoa, hang on a mo, Palm hired someone from Adobe to design installers….?

It is worth noting that the author’s response is a good save.

All of this is because Adobe makes you jump through complicated, unnatural hoops to install its software.