The incomprehensible world of minimalist weather apps for your brand new smartphone
Have you ever noticed the weird world of minimalistic weather apps for iOS? Look at these:
The antepenultimate one is web-based. The remaining two are Android knock-offs. And that’s not all! Others are still in development, or have presumably died as mere designs:
http://dribbble.com/shots/576362-iPhone-Weather-App
http://dribbble.com/shots/553778-thermometer
http://dribbble.com/shots/665983-Simple-Weather-App
http://dribbble.com/shots/559715-Minimo
http://meer.li/designs/color-weather
http://www.workbymark.nl has one too!
How many of them does the world need, exactly?
Are they simple design exercises (even though some are pretty bad)?
Poll: Do you use them?
Consistency, where are you?
I am nowhere near being an expert UI/UX designer, but I like staying aware of this stuff as much as I can, even if it has nothing to do with my PhD. Why am I telling you this? Because in this post I will pose a simple question about UI/UX. Please note that I am trying to learn more than to criticise.
How did I come to write this post?
Yesterday I (as maybe many of you have) installed the new OS X: Mountain Lion. That increased my need to install iOS 6 on my antique iPhone 3GS to test iCloud synchronization. I was reluctant to do so because it is currently a beta, and this is the only phone I have for development and everyday use. But I somehow enjoy having problems with technology, so I installed it. To my surprise, both work like a charm, but I have found a couple of UI inconsistencies I would like to discuss:
1) If I had crafted the new iOS Safari, would it have been rejected from the App Store?
Look at the two images below. Apple has just revamped the sharing UI in Safari (and elsewhere across the OS). Instead of big gray buttons with the available actions, it now shows colorful icons to present your choices.
Let me quote a phrase verbatim from the iOS Human Interface Guidelines:
Do not reuse iOS app icons in your interface. It can be confusing to users to see the same icon used to mean slightly different things in multiple locations throughout the system.
I agree: seeing those icons, I expect to switch from Safari to the respective app, even if it is only to perform the sharing operation. However, tapping on them will not actually open the app. It is not a big source of confusion, but maybe it is not the best solution either, is it? At the same time, I wonder how many apps have been rejected in the past for doing exactly that…
2) Consistency between platforms is one step closer, but is its full potential being exploited?
I will take a very simple example: the new OS X Safari “Show all tabs” feature vs. its iOS implementation. The iOS version looks as follows:
Meanwhile, the OS X version looks like this:
As you can see, the icon that starts this view and the button that closes a tab are completely different on the two platforms. I am aware that the desktop platform does not need a big button, because you will not have to tap it, but I think the iOS abstraction is smartly engineered, mainly through its color ;)
Going one step further, the desktop version enters the visual navigation of tabs with a pinch on the trackpad. Why is it not identical on iOS? I am quite sure there is no technical limitation, and it would be a useful shortcut!
Maybe I am being a little picky, but I am sure that these things would improve the consistency and predictability of the interfaces. Besides, together they would make a system ready for perfectionists.
As always, I look forward to your comments!
In any case, I want to add that OS X Mountain Lion has surprised me from the very first minute. I think there are many improvements in UI fluidity, and the new Safari is a rocket! However, my favourite feature is iCloud synchronization, which is starting to get into better shape (e.g. iMessage notifications between iOS and OS X are synced in real time). Still, I am very hard to please, and I think there is a long way to go…
On Facebook’s Initial Public Offering
https://speakerdeck.com/u/fbeeper/p/on-facebooks-initial-public-offering
As you can see above, I’ve attached a link (embedding doesn’t work) to a simple keynote I presented today as part of my Computer History elective course. The theme of the day was “today and tomorrow”, so it is not focused on history!
Please note that I’m nowhere near being an expert on this topic. The keynote was put together to organize ideas and to promote discussion among the students about the unquestionably hot topic of Facebook’s initial public offering. Hence, I raise many questions, and not so many answers, about the open arguments in this matter. Still, I hope you enjoy it. Feedback will be appreciated :)
*You will see that it has many light cyan rounded boxes with comments to clarify some things I said in class that weren’t in the original keynote. Sorry for the clutter; I prefer my presentations with infinitely less text :P
Would you please improve my email?
I have left my blog quite abandoned… the truth is I’m quite focused on my PhD; that’s a good sign, isn’t it? But today the results aren’t being kind to me, so let’s write something. I’ll keep it short though :)
I have had a really mundane idea in the privacy of my skull for a while. It is related to the way we communicate over the Internet via email. I’m not very happy with this “technology” as it is.
I have to admit that it is an extremely powerful communication medium. It has the power of a blank piece of paper: I can do (almost) whatever I want. However, all this power cannot be controlled the way I would like.
I will summarize my frustration through some random examples:
- If I have several questions for my interlocutor, I like them to be answered. However, every day we have to confront a number of diagonal readers. I’ve developed extreme writing skills to deal with these people… but why the hell can’t I do it with a form? Forms demand less attention, and I get to set the rules for an easy interaction…
- Is there a big difference between talking synchronously or asynchronously? I sincerely think there is not. Nearly all of the time, the topic is what makes the difference, not the synchronization modality. Facebook was the first to blur the edge between sync/async. It’s not perfect, but it shows my point.
- Organization is pretty rigid and old-fashioned, don’t you think? Folders? Labels? Stars? Is that enough? Sometimes I would like to make my own thread summaries, take notes, highlight text and attachments by relevance… all in my client, not in my paper notebook/Evernote.
- Search is supposed to be awesome, but it actually isn’t… why? I can’t find some obvious things… maybe with better organization tools I could overcome that?
- Lack of (real) integration. I dare say that most people receive most of their RSVPs to events and meetings through email. Is it that difficult to integrate mail with organization tools (IMHO, another kind of weak software nowadays)? Apple integrated some of this by detecting dates and times in your text, and Google added event sharing… but, again, I feel it is really poor integration.
I’m not the first to say this, nor even the first to try to find a better way to communicate… Yet they have all failed. My last hope was Google Wave (right up until they presented it), but they failed even more loudly than the rest. Why? I think they wanted to leave email behind… but email is an awesome technology precisely because of its simplicity and penetration. Contradictory, huh!?
Computer Science Literacy
I am about to present some reasonings that will be obvious to many people. However, I want to give those ideas some structure, and maybe open an interesting discussion with those who do not exactly agree with me, or haven’t thought about it at all.
If you are reading this blog I am quite certain you will agree with me that the world will not only change in the future. It has changed already. But, sadly, we (take that as a world-generic “we”) are not perfectly adapted to it. Undoubtedly, the main engine of this immediate change is our ever-growing interaction with computers. It is not only that we want to be smarter and more efficient each minute of our lives; the world is pushing us too. Computers help us get there: in communications, and therefore in many social interactions; in information retrieval, generation, and distribution; even in entertainment. But this situation is neither that clear nor easy to deal with for everyone. I’m almost exclusively surrounded by technologically immersed people, but when I think about, or interact with, people without this immersion, I realize that some could get awfully lost.
I am confident in asserting that Computer Science has become as important as, if not more important than, other essential kinds of knowledge. Ask yourself whether it is more important to write with correct orthography, or to know what is safe when making an online payment.
Many people (a scary amount) know neither the risks nor the benefits of having stronger technical knowledge. I am not only talking about security concerns, as in the previous example. I am talking about the possibilities (and the problems that can, and will, arise) of people interacting with a vast variety of computers and computer-related topics: taking photos on a trip, creating Facebook pages, leading the absolutely indispensable and ever-changing project of their workplace’s webpage, connecting a new device to their private wifi network, sharing and getting files from any network…
I think that this is too much for the current (or may I say, nonexistent) CS education of the population. People get frustrated when things do not work. People can get fooled by their IT technicians. People may not see, and as a consequence cannot exploit, the real possibilities of many computing solutions. And most worryingly, they can get involved in really dangerous security problems. Some of the current situations will hopefully fade away, but new CS-related problems will definitely arise. I am sure that this is nothing the alleged “digital natives” will overcome through their nativity alone. They can do better than their elders, but not even close to perfectly. In other words, they will be able to write, but with annoying misspellings.
Hence, I believe we need to incorporate a thorough computer science program at all levels of education. Even in the worst-case scenario, I cannot see anything bad coming from people knowing the rudiments of how computers, programs, networks, and security systems work. I can only see benefits in spreading such knowledge.
Going one step further, I think that someday CS will no longer exist in university programs as we know it today. The passage of time and the wider adoption of basic CS knowledge will segment this area of knowledge. Just as ancient Science gave rise to specialties such as Mathematics, Biology, and Physics, new major areas will likely emerge from Computer Science.
Software as a piece of virtual hardware
Some weeks ago @mattgemmell wrote a fairly interesting article (and a short one, in case you want to read it now) advising us to write software as a piece of virtual hardware. I kind of like this idea, even though the article is not entirely clear to me. But it got me thinking for a while, and here are my thoughts:
It is straightforward to see that an iOS app has the mission of turning the underlying device into something more specific: a calculator, a magazine, a notebook, etc. The iPhone/iPad are no more than digital electronic Swiss Army knives. This entails that the application needs to cover the complete experience; depending on other apps is utterly unacceptable. Additionally, these new pieces of “hardware” are likely to be composed of some common parts, which will definitely help us understand what we are doing in this purpose-shifting environment. But the most important point seems to be that we have to forget old and different abstractions of software (i.e. those imported from other platforms) that no longer apply. Thinking of software as a piece of hardware would certainly help us do that, because it rids us of many temptations. Simple!
However, analyzing this concept thoroughly, I can see that we can take it a little further. I am thinking in terms of the inherent “staticity” of hardware. Hardware, because of its physical nature, is restricted in ways that software is not: for example, in the placement of buttons, plugs, screens, etc. That is what makes software more versatile, but also what could worsen the user experience.
Expert and hardcore users (a kind of expert too) appreciate highly configurable environments, so they are interested in “pure software.” They set everything up to obtain extremely efficient operation. In many ways, we got used to that. It is a personal tuning that requires many hours, interest, and expertise. However, the common user is not necessarily using computers because he wants to. He could be a psychologist, a journalist, a veterinarian… and he does not have time to deal with computer science. He wants to be efficient and successful in his own area. Thereby, I understand that such users are more interested in a piece of “hardware” that helps them solve their problems. So, would it not be more interesting to try harder to do that tuning work for them? It sounds limiting, but it could be quite the opposite. Again, the conclusion seems to be “write software as a piece of hardware.” We do have to bear in mind that doing so puts us in charge of a great responsibility. But that is cool if we reverse a famous Spider-Man quotation: «With great responsibility comes great power» :D
Likewise, the aforementioned hardware “staticity” principle applies interestingly to any visible software changes/updates. That includes moving features, but also adding them. If you leave space for a future feature, your user will be waiting for it (maybe a little frustrated in the meantime) and will take to it smoothly. Otherwise, he will not like the change, even if it is good. It is as if he had to change his physical device: he feels forced to learn something new, and that sounds painful. Not to mention that this connects nicely to my past lucubrations about “teaching the client” ;)
It is too easy to criticize computers
A few days ago @abraham_martinc told me that I’m becoming more of an Apple fanboy. It was an observation made after this retweet:
@fbeeper, Nov 12: Que fàcil és criticar… RT @newsycombinator: Tim O’Reilly: I am really starting to hate Mac OS X. j.mp/uNy7IP
* English translation: It’s so easy to criticize…
Clearly, I did not express what I had in mind. I’ve got the feeling that people are really used to complaining about technology. People who can see (and express) the bright things coming in the future seem almost extinct to me.
A nice example: I don’t know much (I think nobody knows for certain) about the iOSification of Mac OS X, but wouldn’t you agree that you haven’t heard anything good about it? Arguments include that this is another step for Apple towards ruling everything we can do with a computer. Others say it makes no sense because iOS was not conceived to be used on computers, so it does not apply. And last, but not least, that computers will end up being too easy to use and people will end up dumb as a consequence. Sorry, but… WTF!
I’d rather hear positive comments, or new proposals to evolve the current state of things (even more so from influential people like Tim O’Reilly).
I can promise that I hate many things about my OS too. Last Tuesday I spent nearly 30 minutes of my life trying to close all the applications that were consuming memory on my computer. It was a long list of greedy apps including Adobe Photoshop, Illustrator, two instances of Safari with ~15 tabs each, three huge Keynote presentations, Calendar, Mail, iChat, and Spotify. I can assure you that in those 30 minutes I hated my Mac even more than O’Reilly could have hated his. They were the only 30 minutes I had to prepare my history class for the week. But while I was waiting, I was also thinking: how can Apple solve this situation if they are switching to what seems to be an even less powerful OS? The answer is pretty easy! There is no way I can use all these apps simultaneously; still, I want them open, ready for me each time I switch desktops. If Apple engineers manage to sleep individual apps and recover their states from memory, my problem will be instantly solved. My Mac will effectively behave like iOS, where I don’t care what is open and what is not. All those apps will remain open if needed, sleeping if not.
Additionally, that would probably justify the already discredited sandboxing, by ensuring its technical viability. It’s not about becoming «more user-hostile, with more attempts to lock you into Apple’s own apps», as O’Reilly puts it in his post.
Positive thinking! Complain about things, but also contribute to (or foresee) the good things to come ;) Any additional thoughts?
P.S. I have failed to make this post shorter… maybe the next one! :P
It has been a long time since my last post (dammit!! again!!!). But this time I have decided to follow a different strategy from now on, taking the advice of my friend @jaytuit: I will try to express fewer things in single, simpler posts. Divide and conquer! Less time spent on my side, which I hope translates into a more constant flow of content. But it is also about your time! I bet my awesome readers don’t have much time either. So, I hope you enjoy the new format!
The next post is coming in a few minutes, stay tuned! :D
After a few weeks of a necessary break (even if it was not an actual break, and I was working on my PhD harder than ever), I’m back with a new post. This is also my first “real-time” reaction to world-breaking news: “Steve Jobs presented his resignation as CEO of Apple Inc.”
I know many people do not want to read the “n-th” post about Jobs’s health and speculations about the future (funny example). If you are one of those, I invite you not to continue… because, as a computer history teacher and present-day Apple fan, I somehow feel the need to pay my respects to a story like this. Otherwise, consider that I understand this hate; it’s true that many people are talking nonsense. Some are already nostalgic about how awesome Steve Jobs was (sorry? He is still here!), others doubt Tim Cook as the new CEO (what is your effing basis?), or even focus on the fall of Apple’s stock and of Apple itself (this story is boring today; we’ll talk about it in a few years). Nonsense. I will try to focus on other things, and this time I promise a really short post.
Is Steve’s resignation that worrying? Really?
Firstly, I want to remind the reader (if any) that Jobs is not gone from Apple; he will be Chairman of the Board… If I’m not completely mistaken, that is a lot of power. Thereby, he still holds the helm of this boat. But if I’m wrong, has nobody learnt anything from Steve Jobs in the last 35 years? Is nobody capable of taking his place? I am completely sure that somebody (inside or outside Apple) will take care of the future(s) of technological development with some of the knowledge that Steve has shared with the world.
I know it’s difficult to learn from others’ experience and mistakes, but many people have already learnt things by his side, or even by competing with him. It is not exclusive to Jobs or to Apple anymore. Nonetheless, students must grow before they start profiting from that knowledge ;)
In my humble opinion, everything started with understanding that a computer is not “technology”; it is something that we have to like. After that, many other personal traits have to be added to the list of essentials for an awesome CEO. Among them I see perseverance, perfectionism, and having a dreamy mind as the most valuable, in that exact order.
The world is not ending. We are only adding a milestone to history. The future is as bright as it was yesterday ;)