Flipkart has reportedly paid around Rs. 45.4 crores to acquire a majority stake in payments firm FX Mart, as per a report citing documents available with Registrar of Companies (RoC).
The Livemint report further mentions that Flipkart is planning to launch a payment service on its mobile apps in the next three months. Flipkart plans to integrate the payment service into its own platform as well as into Myntra, with two senior Flipkart executives joining the board of FX Mart.
Founded by Amit Narang in 2013, the firm is headquartered in Zirakpur, Mohali, and has eight offices and an agent network of over 1,000 locations across the country.
FX Mart holds a prepaid payment licence issued by the Reserve Bank of India (RBI), which will allow Flipkart to offer a digital wallet on its app and avoid paying a cut to third-party wallet services.
While Flipkart supports cash on delivery, the move suggests the firm is looking to make inroads into the online payments and mobile wallet space. Flipkart had previously made a strategic investment in mobile payments firm NGPay in September 2014, following the shutdown of PayZippy, the secure wallet built by Flipkart. We’ve reached out to Flipkart for comment on the acquisition.
In recent news in the payments space, cab aggregator Ola announced plans to integrate its mobile wallet with Oyo Rooms, Lenskart, Saavn, among other startups.
The top players in the mobile wallets space are Paytm, Freecharge, MobiKwik, PayMate, CitrusPay, and Ola Money.
Google released an update to its Maps app for Android last week that added thumbnail previews of a location in Street View. Now, Google Maps for Android has received another update, adding a few more features, including a new navigation UI. Additionally, Google has started rolling out Google+ Collections, already available on Android and the Web, to users on iOS.
The new Google Maps for Android update (see above screenshot) can be expected to start rolling out in a few days; users who cannot wait can download and sideload the Google-signed APK file. The update bumps the version number to 9.14 and can be expected to hit Google Play India in the coming days. First reported by Android Police, it primarily focuses on improving overall navigation.
The navigation settings in Google Maps for Android have received a new interface (above image: left shows the old interface, right the new), and the thumbnail view in navigation search has been replaced with a bigger image showing the routes. With the new update, maps on the navigation page can be scrolled and zoomed, which was notably not possible in the earlier thumbnail view. The selector for mode of transport has been shifted slightly below the start and destination fields.
The new update also brings more detail to the route page. Details about possible slowdowns on the route have received a slight improvement and now show more information than in the earlier version. The toggles for car, public transport, walking, and cycling modes now show more informative cards, which can be dragged to reveal more information.
The option to pick routes by touch now shows an ETA along with details about possible slowdowns or road blocks. When visiting a new place or restaurant, we are usually unaware of how busy it is during the day. With the update, Google Maps will offer a chart for businesses showing how busy the place is during each hour of the day, a feature announced late last month.
In other news, the Pinterest-like ‘Collections’ feature for Google+ users on Android and the Web was released in May, and was promised to arrive soon on iOS. The feature has now been released for iOS users. Collections allows Google+ users to create posts centred around specific topics, comprising videos, photos, and more.
Taxi service provider Meru on Tuesday launched what it calls the first-of-its-kind ride sharing service ‘CarPool’ for its customers across the country.
Integrated into the Meru Cabs mobile app, CarPool offers the ‘personal car’ ride sharing option for people travelling in the same direction or area.
“This is a pure customer to customer service, and we are making it available across the country,” Siddhartha Pahwa, chief executive of Meru Cabs, told PTI.
“We are currently not monetising it, but we will work towards charging a margin fee in future once the service is well established,” he added.
The company will conduct thorough credential checks on all customers who offer to give rides, including taking their driving licence, Aadhaar, and PAN details.
CarPool by Meru also comes with a wallet partnership with mobile wallet Paytm to offer its customers cashless travel.
According to a recent survey undertaken by Regus, a leading global workplace solutions provider, 26 percent of commuters in India spend over 90 minutes per day travelling to work and meetings.
In addition, about 16 percent of all commuters drive to work on their own, and on average 67 percent of respondents drive to work in their own car, indicating the huge potential for carpooling among commuters.
“The top 20 cities alone have over 12 million cars and the opportunity is to bring them on a popular and trusted platform like Meru,” Pahwa said.
“The CarPool initiative aims to reduce the one-person per vehicle issue through sharing a ride with another person headed in the same direction with added convenience and economy of travel,” he added.
There is also a growing consciousness among people about the need to reduce road congestion, pollution, carbon footprint and the stress associated with travel, he pointed out.
Buying a painting is an expensive affair. The price tag, moreover, grows exponentially if the piece was made by a great artist like Gerhard Richter or M.F. Husain. While you can always buy a cheap digital copy, or use your Photoshop skills to create a painting, technology has largely failed to offer an on-par experience. That’s changing. German researchers have released a paper detailing how a computer algorithm can turn ordinary images into professional-looking paintings.
The researchers have released images that bear an uncanny resemblance to the works of some of the greatest painters of all time: Vincent van Gogh, Pablo Picasso, and Edvard Munch. These images took just an hour to produce, the researchers noted, and as they further optimise the algorithm, they expect processing to get faster.
The algorithm, which hasn’t been released to the public yet, showcases the advances made in the field of deep learning, in which computers identify and classify patterns in huge data sets using systems designed to mirror the way humans think.
The researchers are able to blend an ordinary image with a painting from an artist using something called convolutional neural networks. “The key finding of this paper is that the representations of content and style in the Convolutional Neural Network are separable. That is, we can manipulate both representations independently to produce new, perceptually meaningful images,” wrote (PDF) the authors.
“Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision, our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.”
There are some complications, however. The researchers noted that one needs to maintain the right balance between style and content, as more focus on the former would produce an image that wouldn’t look similar to the original content. The researchers plan to release another paper on this later this year.
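The separation the researchers describe can be sketched in code: in this style of algorithm, “style” is commonly captured by Gram matrices of a network’s feature maps (correlations between channels), while “content” is compared on the raw activations, and a weighted sum controls the style/content balance. The following is a minimal illustrative numpy sketch, not the authors’ implementation; the function names and the alpha/beta weights are assumptions for illustration.

```python
import numpy as np

def gram_matrix(features):
    # features: array of shape (channels, height*width) from one network layer.
    # The Gram matrix captures correlations between channels, which
    # represent texture/style independent of spatial layout.
    return features @ features.T

def total_loss(content_feat, style_feat, generated_feat, alpha=1.0, beta=1000.0):
    # Content loss: how far the generated activations drift from the photo's.
    content_loss = 0.5 * np.sum((generated_feat - content_feat) ** 2)
    # Style loss: how far the generated Gram matrix drifts from the painting's.
    c, n = style_feat.shape
    g_style = gram_matrix(style_feat)
    g_gen = gram_matrix(generated_feat)
    style_loss = np.sum((g_gen - g_style) ** 2) / (4 * c**2 * n**2)
    # alpha/beta trade off content fidelity against stylisation -- the
    # "right balance" the researchers mention: raise beta and the output
    # looks more like the painting, less like the original photo.
    return alpha * content_loss + beta * style_loss
```

In the full method, an image is optimised iteratively to reduce this combined loss across several network layers, which is why generating one image took the researchers about an hour.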
Google will for the first time take its marshmallow-shaped self-driving car beyond its home turf in California and onto the streets of Austin, Texas.
In the next few weeks, Google will start testing a few of its prototype vehicles in the area north and northeast of downtown Austin, the company said Monday. The cars will navigate the same area where Google’s Lexus SUV self-driving vehicles have driven for the last couple months.
Both sets of vehicles will continue to have human drivers on board for the testing, Google said.
The company said in July that Austin would be the next location for tests of its self-driving cars, but Google did not say whether the tests would include the Google-designed prototype.
Bringing the cars to Austin will allow Google to test its vehicles in a location with road conditions, traffic patterns and driving situations that differ from those in Google’s home base in Mountain View, California, the company has said.
Over the weekend, Google held an event at a children’s museum in Austin with its cars on display.
During a question-and-answer session, kids said the prototype vehicle looked like a gumdrop, a bug, a computer mouse, and “the future,” Google said in a blog post on Monday.
In a self-driving car, kids said they’d most enjoy eating tacos and reading, Google said.
A spokeswoman for the Austin Transportation Department said there is no Texas law that would require Google to receive the department’s approval for testing.
Bryce Bencivengo, a spokesman at Austin City Hall, said that from a regulatory standpoint, self-driving cars in Austin are currently treated the same way as regular cars, and must obey all traffic laws. The results of Google’s tests might pave a way forward to create a framework for new self-driving car regulations, he said.
Google has made it clear that it isn’t a fan of Flash-based content, so it is going to start aggressively blocking it in the Chrome Web browser. The company has set a date of September 1, when Chrome will begin blocking all Flash content on a website that isn’t “central to the webpage.” The company first announced the move back in June, with the aim of saving laptop battery life.
Starting Tuesday, the company will block Flash-based content and pause auto-playing videos by default on non-video websites. Google had started blocking Flash content earlier this year in a beta version of Chrome. The block should significantly bolster security against malicious Flash ads while also improving power consumption. As Google explained: “When you’re on a webpage that runs Flash, we’ll intelligently pause content (like Flash animations) that aren’t central to the webpage, while keeping central content (like a video) playing without interruption. If we accidentally pause something you were interested in, you can just click it to resume playback. This update significantly reduces power consumption, allowing you to surf the Web longer before having to hunt for a power outlet.”
The update will block Flash content not central to the webpage by default, but users who want to enable the feature before the updated Chrome browser rolls out to them can go to Advanced Settings > Privacy and select “Detect and run important plugin content”. Once the update rolls out, this setting is enabled by default, but users can always switch back to “Run all plugin content” if desired.
Notably, Mozilla’s Firefox browser recently decided, during a spate of Flash vulnerabilities, to block all versions of the Adobe Flash plugin by default.
As for advertisers, Google recommends they switch to HTML5 ads to continue their ad businesses seamlessly, and is offering tools to help with the transition. Advertisers who choose to stick with Flash ads are likely to see a significant drop in user engagement.
Adobe’s Flash has become infamous for its security breaches and resource-hogging glitches. The technology has been criticised by many in the past, including the late Steve Jobs (as a result, the iPhone and iPad don’t support Flash), Facebook’s CSO, and Google itself. Google ditched Flash in favour of HTML5 on its video portal YouTube a couple of years ago.
Given Chrome’s popularity, the latest development might just be the final nail in the coffin for Flash. Chrome is the second most popular desktop Web browser, as per market research firm Net Applications.
With Uber – the taxi-ordering app – now valued at $40bn (£25.5bn), it is no surprise others are trying to take old-school, real-world services on to your mobile phone. ZipJet (iOS, Android, free) attempts to move the dry-cleaners into your pocket. Available only in London at the moment, its interface is slick – finding your location automatically and scanning your credit card number with the iPhone’s camera, making it a breeze to order. Put in what you need cleaned, then choose a half-hour collection and delivery slot. The cost compares well with a service wash at a local launderette – £12.50 for a bag of washing and £10 for a two-piece suit – and it works fantastically. But while it is sure to be a big hit in New York, where communal facilities are the norm, it seems unlikely it will ever be more than a niche product in the UK.

Equally, Zomato (iOS, Android, Windows Phone, BlackBerry, free) is the latest restaurant finder, but what makes it stand out is the breadth of its service. Search for coffee shops, a plethora of culinary genres, or even somewhere specialising in hot chocolate, and the app locates them on a map. From there you can browse the menu and book a table. Good for the tastebuds, if not so beneficial to the waistband.
Almost every new gadget has a camera these days, which means almost every moment can be captured for posterity. But the sheer number of photos we collect as we go about our lives is becoming a nightmare of organisation.
To start with, how do you store photos? In virtual albums? One massive mess sorted by date? By people, or place? Or perhaps by camera?
Then there’s the pile of poor photos, the ones out of focus, misaligned or just plain missing the subject; do you weed them out, and how do you go about doing that?
Apple’s iPhoto was one of the best ways for non-professionals to organise photos on a Mac; Aperture was Apple’s professional solution. Both have been replaced by simply “Photos”, released last week with the latest version of OS X Yosemite, 10.10.3.
Photos is a complete rewrite of Apple’s photo management software, although you wouldn’t know it if you weren’t paying attention.
Migrating from iPhoto is relatively straightforward. Fire up the free Photos app, select your iPhoto library and hope for the best. I migrated just under 40GB of photos from an iPhoto library, but Photos insisted on “repairing” it first. Everything went without a hitch, but I backed up the library first, just in case.
Once the migration is done, the app presents users with what appears to be iPhoto, just without the sidebar (which can be reinstated).
Events are now called “Albums”, which makes more sense; “Faces” is still there, tagging people using facial recognition; and “Photos” is now organised into “Moments”, similar to an iPhone or iPad.
In fact, it’s difficult to tell what’s changed on the surface, but the app loads and operates noticeably faster and takes up 66MB of space compared with iPhoto’s 1.7GB – a vast improvement, especially for laptops with limited hard drive space and processing power.
In terms of actually organising your photos it’s the same deal as iPhoto – sort them manually by album or by name of album, or just view them as one big pile. Smart albums can automatically generate collections based on picture information such as capture location or keywords you’ve manually applied to the photos.
Albums can be sorted into folders of albums, which is quite useful for grouping collections together, while Photos automatically groups certain types of media together such as videos, time-lapse, burst or slow-mos taken with an iPhone.
Photos is not a revolution in solving organisational issues. There’s no way to automatically find duplicates, it has no intelligence to find those photos you could safely get rid of that are out of focus or just plain poor, and you can’t sort albums by date, only manually or by name.
Photos is a big step up in image processing, however, with a raft of easy to use but powerful image editing features. Adjusting the lighting, saturation and turning the image black and white is easy using simple sliders, or can be broken down into individual settings for fine tuning. Tools for adjusting white balance, sharpening an image and adjusting colour levels are all easier to use than most other image editing apps, with impressive results.
A range of Instagram-style colour filters are available, as is an auto enhance feature, while crop, rotate, resizing, healing and red-eye removal tools are all there. For the vast majority of image edits and quick touchups, Photos is all most will need.
Neither iPhoto nor Aperture will be updated going forward, which means users of either face a dilemma: upgrade to Photos for free, switch to a more advanced product such as Adobe Lightroom, or stay stuck with their current version until it can no longer run.
For most users Photos is at least as good as iPhoto. It’s not revolutionary in the organisation department, but it is an improvement in speed and image editing. For Aperture users, however, it’s a step down. If you made batch adjustments based on camera, lens or lighting settings you will need to look elsewhere.
The Babolat AeroPro Drive Play connected tennis racket comes with a sensor embedded in the handle plus a matching app that syncs via Bluetooth, recording the minutiae of your playing performance. The truly fearless (or perhaps just the very best) players can share their stats with other users via the app. Yet although the racket is apparently endorsed by tennis god Rafael Nadal, his superhuman performance statistics are nowhere to be seen on the community page, which is a little disappointing.
For mere mortals, setting up the racket is uniquely frustrating in only the way that an unsuccessful Bluetooth syncing experience can be. I turn the racket on. I press sync on the app. Bluetooth is on. The racket says it is connected but won’t sync with the app. Rinse and repeat for about half an hour.
I turn everything off, have a cup of tea and then try again, and the same processes mysteriously work this time. Once my blood pressure drops I head off to the court, connected, synchronised racket in hand. The only thing I have to remember is to press one of the two buttons on the butt of the racket before I start training. Both these buttons are, thankfully, only just proud of the end of the racket and not obtrusive when I hold it. So that’s a good start.
A blue light flashes slowly, signalling that it is recording. Overnight, I’d charged the racket by connecting a USB cable from the handle to my laptop, giving me an odd moment of modern joy at seeing a tennis racket plugged in. If you hear anything about “the internet of things” and have no idea what that means, then this is a glimpse of what that feels like; everything around you will be connected in some form. Next – they are coming for your tennis racket.
Whether it is worth trading the blissful ignorance of my unskilled but highly enjoyable tennis lessons for a fleet of comprehensive and competitive performance statistics – and £300 – remains to be seen.
In most ways, my training session is no different to when I play with an antiquated racket. But there is a discreet, blinking light on the end of my racket that reminds me it is recording the sessions. On a full battery charge I can record a little more than five hours of play, and reading through the results is compulsive. Being French, the app likes to occasionally display a logo that sticks two fingers up at the user. I’m hopeful that this means something different in France than in the UK, although my one-handed backhand is definitely too flat, it’s true.
It would be easier if the racket synced with the app automatically but, as it is, I upload one session at a time. The results are unforgiving in their detail: one hour and 26 minutes on a hard court outside, 531 shots and 1,026 calories burned (I’ve already told it my height and weight, though this is still waving a stick in the dark, I feel), 351 forehands, 175 backhands and three smashes (I know the latter is way under the number I hit, so that makes me wonder how many other inaccuracies are buried in here). Babolat’s range of smart racquets is coming up to two years old, and the sensors currently can’t distinguish between a volley and a groundstroke, so we should expect the software to become more refined – and accurate – as it is developed.
But the detail, if we are happy to accept it is largely correct, is fascinating: the app will show where on the strings I hit the ball, broken down by serve, forehand or backhand (only 25% of my backhands were in the sweet spot, dammit). It also assigns an overall “pulse” for my game, which combines technique, power and endurance to rate me at 41.63%. On a seemingly never-ending list of other Play users, this heartbreakingly places me at #4,217. How many smart rackets have they sold? 4,217?
There’s always an initial buzz with activity trackers, a curiosity about quantifying ourselves in a more tangible way. In my experience, we become intensely interested for about three months, which is long enough to be able to relate the data to our actual activity in some way, and then we get bored. This racket is a little different, in that it’s a serious racket whether you connect it or not.
It’s extremely handsome, graphite and tungsten, light, firm and a very stiff play. It’s the kind of racket a beginner, even a twice-weekly keen-but-crap player like myself, could only dream about doing justice to. But the data really is compelling, revealing encouraging glimmers of a developing forehand topspin while ruthlessly declaring my serve to be, well, awful. That said, if I want proper motivation I need my real-life coach; he quipped that I hit the frame so much I may as well take the strings off my racket. I’ll keep on with the lessons.
Google’s smartwatch operating system, Android Wear, is on its third major revision, and this time it is a coherent and useful platform that does what a smartwatch does best – handle notifications – making it the best platform out there.
Android Wear watches are only compatible with Android devices with version 4.3 Jelly Bean and up. Pairing the two is easy. The Android Wear application downloaded from Google Play handles the setup.
Turn the watch on, note the name of it and find it in the list of devices inside the Android Wear app. Hit yes to pair, let it sync for about a minute, and you’re good to go.
At its heart, Android Wear is all about cards. Cards can be apps, notifications, information, controls and interactive tiles. They pop up as and when required, for glanceable information and more.
A notification, for instance, can display a small snippet of information such as an email subject and sender. Tapping on the email allows you to read the whole thing. A swipe to the left and you can archive the email or reply to it via voice, emoji or canned answers.
Navigating Android Wear is simple. Swipes to the left for more, to the right to dismiss or go back, up to scroll and down to hide. Tapping the watch face brings up the app launcher, new for Wear 5.1, with a small list of apps. Swipe to the left to bring up a list of recent and favourite contacts to send them an email or a text, or call them, then once more to talk to Google Now.
Users can also just say “OK Google” to the watch to fire up Google Now for voice searches or commands for setting timers, making notes or launching apps, for example.
A palm over the face puts the watch to sleep, as does a press of the button if there is one. Wrist-flick gestures can also be used to scroll up or down through cards without needing another hand.
Android Wear 5.1 introduced the ability to connect to Wi-Fi direct from the watch. Wear mirrors the connections on your smartphone, pulling Wi-Fi passwords and networks so there’s no need for manual setup.
When out of range of Bluetooth, the watch can automatically switch to Wi-Fi to connect to the phone as long as it has internet access. It works both across the same Wi-Fi network and remotely over the internet using Google’s servers, which means notifications, searches and any other function work even when not in the vicinity of the smartphone.
It works reasonably well, with little lag over a local Wi-Fi network, but the handover between Bluetooth and Wi-Fi isn’t the smoothest, taking around 20 seconds. It is only noticeable if you’re trying to use the watch at the time.
Android Wear can also store music from Google Play music on a smartphone and connect directly to Bluetooth headphones to play it back. Typically up to 4GB of music can be stored, either from playlists or albums and browsed through album cards. Syncing the tracks over Bluetooth takes a while and hits battery life quite hard, meaning that doing it overnight while charging is recommended.
Wear 5.1 now includes a lockscreen, which replicates Android’s pattern lock and is meant to kick in when the watch is taken off your wrist. I have had issues getting it to work on certain watches.
The primary function of any smartwatch is to display notifications from a smartphone and Wear does it best out of any smartwatch platform, including the Apple Watch, Pebble and Samsung’s Tizen.
The way Wear connects with Android on the smartphone means any notification shows up if you want it to, without the developer of the app needing to do anything.
If the developer has added quick actions for the notification, they show up, too, while small extensions can be made to the app to provide more options on the watch.
The card interface is perfectly suited for displaying notifications. New ones crop up at the bottom of the screen with a small snippet and can be expanded. It means they’re ever present unless dismissed, with the latest one shown first, which makes triaging notifications easy and fast.
Apps you don’t want notifications from on the watch can be blocked, turning it into a filter to prevent overload. Wear also obeys Lollipop’s “none, priority or all” notification schemes or KitKat’s silenced mode, depending on what version of Android is running on the connected phone.
Android Wear has no keyboard as standard, instead relying on canned answers such as “I’m on my way”, voice dictation or emojis. Voice dictation works well even in relatively noisy environments, but is difficult for complex messages. Emojis are often the best way to respond, either picked from a list or by Google recognising a finger drawing of what you want, which works surprisingly well. All responses require a solid data connection on the smartphone.
Apps built-in and otherwise
Because Android Wear is mostly based around notifications and information snippets, many dedicated apps simply aren’t needed.
Google’s apps include Fit, standard time-keeping apps, Google Play Music, Agenda for calendar, a torch app and Google search.
Google Play Music allows caching of music on the watch, or control of music on a connected smartphone, while Fit tracks steps, activity and monitors heart rate, feeding back to the Google Fit Android app.
Other Google apps, such as Gmail, Camera and Maps, are triggered through searches or via the phone, with the latter capable of delivering turn-by-turn walking or driving directions to the wrist.
Manufacturers such as Motorola, Asus and LG include their own fitness apps for monitoring heart rate and activity, as well as other apps for controlling smartphone apps. Often these duplicate functions, but users can pick which app they want to use as the primary heart rate monitor, for instance, either on the watch or through the Android Wear app on a smartphone.
Dedicated Android Wear-only apps are few and far between, with notable exceptions being watch faces and the UK Trains app that displays train times on the wrist.
Most apps are extensions of Android apps and the list is hundreds long, including big names such as Evernote, WhatsApp, Facebook Messenger and Uber.
Fitness apps such as Runtastic, Strava and Runkeeper are also available, taking advantage of built-in GPS functions in some watches.
Google ships a small but functional selection of watch faces with Android Wear. Some of the best, however, are third-party watch faces, which can add any number of features and functions; some free, some paid.
Manufacturers also bundle their own watch faces with their watches. Motorola’s Pascual is of particular note because it seamlessly integrates calendar appointments into an analogue face. Others include the weather, steps, battery life and even speed, should you need it.
Watch faces have a lower-power ambient mode, which is displayed when the watch face isn’t actively being used. This can be turned off to extend battery, but most Android Wear watches make it through a day with a watch face constantly displayed.
Cinema mode disables the ambient screen and the wrist-turn gesture so that the screen only lights when tapped or a button is pressed, which is handy for situations where the screen lighting up would be distracting.
When and where?
Android Wear 5.1 is launching on LG’s new G Watch Urbane and will be rolling out to all Android Wear watches released since June last year, though not all of them will support all the new features. The LG G Watch, for instance, doesn’t have Wi-Fi, while Sony’s Smartwatch 3 is the only one with GPS.
Android Wear 5.1 has reduced Google’s emphasis on talking to your wrist, which is a good thing. The new menu system makes it easier to get to apps and settings, and the simple swipe-based interface is intuitive.
The emoji-drawing support is excellent and being able to connect remotely to a smartphone using Wi-Fi is useful for when Bluetooth won’t stretch far enough.
Android Wear’s notification-handling and quick, useful interactions powered by Google Now make it the best smartwatch platform currently available, but only if your life is plugged into Google services such as Gmail, calendar and Play Music.