Telanova Blogs


Exploring off-world, and the work going on behind it

arXiv:1701.07550 [cs.RO]

It’s not something you’re likely to think about as you wander around the supermarket at the weekend picking up groceries. But if, just maybe, you were potholing in a dark cave somewhere in the Pennines, you may have reflected on how agile you have to be, how many different ways there are of getting from A to B, and how careful you have to be about getting stuck in a corner. By now you may be drawing parallels with IT and how that sounds amazingly similar (rest assured, there are companies out there that can help you with that). But back to where this blog was going: exploring an off-world object, be it the Moon, Mars or an asteroid.

You’ve probably seen NASA’s amazing photos of the Martian surface, not to mention a few conspiracy-theory and alien-sighting add-ons, but what lies below the surface is still something of a mystery. What is known is that there are caverns, and possibly lava tubes. Now put yourself into theory mode and consider how you would explore those tubes remotely.

Pictures of NASA’s rovers don’t do justice to their size, and simply driving one into a tunnel won’t work: there is no light for recharging, no knowing how far down you might need to drop, and once inside you are unlikely to be able to transmit a signal back to Earth.

Himangshu Kalita, Steven Morad, Aaditya Ravindran and Jekan Thangavelautham, of the University of Arizona and Arizona State University, have released a paper (Path Planning and Navigation Inside Off-World Lava Tubes and Caves, 2018) on just those considerations.

The team’s theoretical paper is full of amazing ideas for overcoming these problems and exploring the caverns. Their SphereX bouncing robots communicate back to a base station and carry mirror tiles, which are laid down to bounce laser light from the rover outside, providing light, communications and power. They demonstrate how serious work can look genuinely fun, and remind us that space exploration is once again having a ball.

Their algorithms and technology for 3D mapping of tunnels with multi-hopping robots, and their “TransFormer” strategy for recharging robots deep inside a cave, cannot be done justice in a short blog post.
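To give a flavour of what cave path planning involves, here is a minimal sketch, not the team’s algorithm, of finding a shortest route through a 2D map of a tunnel using breadth-first search. The cave layout is invented for the example; a real system would plan over a 3D map built by the robots themselves.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.

    grid: list of strings, '#' = rock, '.' = free space.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # parent links for path recovery
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the parent links back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

# A toy cross-section of a cave: '#' is rock, '.' is open space.
cave = [
    "....#",
    ".##.#",
    ".#...",
    "...#.",
]
path = bfs_path(cave, (0, 0), (3, 4))
```

Because BFS expands cells in order of distance from the start, the first route it finds to the goal is guaranteed to be a shortest one, a useful property when every hop costs scarce battery power.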

Hop over and download their paper for yourself.

Have an inspired week ahead: everything we do here today brings us closer to being there tomorrow.


10 Reasons to Fly to the Cloud

You’ve been told that everyone is moving to the cloud. Now before you start booking plane tickets to see where they are, let’s take a look at some reasons why you might want to consider more cloud activity in the near future.


When you’re working in the cloud, the whole team has the same information at the same time, and many tools allow live collaboration regardless of physical location.

Affective computing may soon be watching your Valence and Arousal

Affective computing is the interdisciplinary study of human emotional analysis, synthesis, recognition and prediction (Continuous Affect Prediction Using Eye Gaze and Speech, Jonny O’Dwyer, Ronan Flynn, Niall Murray, Athlone Institute of Technology, 2018).

The team at the Athlone Institute of Technology used open-source software connected to cameras to identify the emotions of a participant during testing. Previous research has generally relied on headsets and eye-tracking devices; cameras are significantly less intrusive. They focused on a combination of eye gaze and speech to predict the participant’s arousal and valence, which, as the image below shows, are cross-related.

Arousal-Valence diagram from Abhang and Gawali


Arousal can be thought of as the amount of energy in an emotion, and valence as how positive or negative the experience is. The team used OpenFace and extracted 31 features from the raw gaze data.

They combined this with 2268 features extracted from the captured speech. The speech and eye-gaze results were combined in a number of different ways, producing predictions of 0.351 for feature fusion and 0.275 for model fusion, representing improvements of 3.5% for arousal and 19.5% for valence compared to the unimodal performances.
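The two fusion strategies mentioned above can be sketched in a few lines. This is an illustration of the general idea only, not the paper’s models: feature (early) fusion concatenates the modalities’ feature vectors before a single predictor, while model (late) fusion blends the predictions of separate per-modality models. The feature values and weights below are invented for the example.

```python
def feature_fusion(gaze_vec, speech_vec):
    """Early fusion: concatenate modality features into one vector,
    which would then feed a single regressor."""
    return gaze_vec + speech_vec  # one longer feature vector

def model_fusion(gaze_pred, speech_pred, w_gaze=0.5):
    """Late fusion: each modality has its own regressor; their
    predictions are blended afterwards with a weight."""
    return w_gaze * gaze_pred + (1 - w_gaze) * speech_pred

gaze = [0.2, 0.7, 0.1]      # e.g. fixation statistics (made up)
speech = [0.5, 0.3]         # e.g. pitch/energy statistics (made up)

fused = feature_fusion(gaze, speech)   # 5-dimensional input vector
arousal = model_fusion(0.4, 0.6)       # equally weighted blend
```

In the paper the gaze side alone has 31 features and the speech side 2268, so feature fusion produces one very wide input, whereas model fusion keeps the two modalities’ models small and independent.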

What does this mean for the real world? One potential application is in health and diagnosis, where the true emotions of a subject need to be understood without intrusive devices affecting the results.


When is an elephant not an elephant?


While you may be happy for social sites to identify what’s in your photos, some people go to great lengths to ensure that images posted online cannot be identified by what they contain. This means automated systems scanning for specific types of image can be fooled into believing that an elephant is a bowl of guacamole. The BBC reported last year ( ) that some images with a single-pixel change could fool image-classification software.

Thanks to research by Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo and James Storer of Brandeis University, a newly published paper (Protecting JPEG Images Against Adversarial Attacks, 2018) identifies new algorithms that could be used to protect such images.

To give a visual, here’s the elephant example that the team worked on, among others.

Elephant images

To the human eye, the four pictures look extremely similar. Working from left to right: the first is the original image; the second has had the attack applied, and is classified as guacamole. The third has been compressed to JPEG before classification, which correctly identifies the elephant, but with low confidence (it considers it may be a chimpanzee). The final image uses the team’s new Aug-MSROI method, which produces an image that is classified correctly with good confidence and can still be decoded by standard JPEG routines.
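The intuition behind JPEG as a defence is that JPEG quantises its (DCT) coefficients, so a perturbation smaller than the quantisation step is simply rounded away. The toy sketch below applies that rounding idea directly to pixel values rather than to DCT blocks, so it illustrates the principle only, not the paper’s method; the pixel values and step size are invented for the example.

```python
def quantise(pixels, step=16):
    """Round each value to the nearest multiple of `step`,
    mimicking the coefficient quantisation inside JPEG."""
    return [step * round(p / step) for p in pixels]

# Four "pixels" of a clean image (values chosen to sit well
# inside their quantisation bins).
clean = [116, 64, 196, 36]

# An "attack" nudges each pixel by a small, imperceptible amount.
attacked = [p + d for p, d in zip(clean, [3, -2, -3, 2])]

# After quantisation the clean and attacked images are identical:
# the adversarial noise has been rounded away.
assert quantise(clean) == quantise(attacked)
```

Real attacks and defences operate on full images and learned classifiers, of course, and the paper’s Aug-MSROI goes further by steering the compression so that classification confidence is preserved, not just the label.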

So if you were thinking of over- or under-laying hidden images to make the photo of your pet kitten identify as a bowl of chilli, think again: the likelihood of the classifier still seeing a purring kitten is high.

64 financial ratios less likely to predict company failure than 6 qualitative measures

Financial Prediction

If you asked people 6 questions about a company, or researched 64 financial indicators about it, which would you expect to better predict whether the company will fail within a year?

Jacky C K Chow of Aston Business School, Birmingham, knows the answer: Chow’s MBA dissertation [1] studied and compared the options available. Chow looked at different machine-learning methodologies for understanding and predicting the financial distress of a company. Using methods such as linear regression, decision trees and artificial neural networks, among others, Chow took the available data on companies in Poland to generate predictions about their financial futures.

Also analysed, using the same methodologies, was a second dataset from a previous study containing six qualitative measures (industrial risk, management risk, financial flexibility, credibility, competitiveness and operating risk) assessed by loan experts.

The results gave a very distinct answer: the loan experts’ qualitative answers, when analysed, gave a better indication of insolvency than the 64 financial ratios [see table below]. Furthermore, the findings indicated that the answers to just 4 of the loan experts’ questions, analysed with a decision-tree classifier, gave a 90% accurate prediction. Collecting assessments from loan experts is, however, more expensive, and quality control must be applied, as with some techniques the conclusions can easily be biased.
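To illustrate what a decision tree over qualitative measures looks like, here is a hand-rolled sketch. The split rules, ratings and example firms below are entirely invented; the dissertation trained its classifier on real loan-expert assessments, where each measure is typically rated positive, average or negative.

```python
def predict_insolvency(firm):
    """Toy decision 'tree' over the six qualitative measures.
    Each measure is rated 'P' (positive), 'A' (average) or
    'N' (negative). The thresholds here are illustrative only."""
    if firm["competitiveness"] == "N":
        return "insolvent"
    if firm["credibility"] == "N" and firm["financial_flexibility"] == "N":
        return "insolvent"
    return "solvent"

risky = {"industrial_risk": "A", "management_risk": "N",
         "financial_flexibility": "N", "credibility": "N",
         "competitiveness": "A", "operating_risk": "N"}
sound = {"industrial_risk": "P", "management_risk": "P",
         "financial_flexibility": "P", "credibility": "P",
         "competitiveness": "P", "operating_risk": "P"}
```

A learned tree would pick its own splits from data rather than use hand-written rules, but the appeal is the same: a handful of human-readable yes/no questions, which is exactly why a few expert ratings can be analysed so cheaply once collected.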


Need assistance predicting your IT budget? Speak to a telanova consultant about how we can help make your IT more predictable.

Call now on 01344 567990

[1] Jacky C K Chow, Analysis of Financial Credit Risk Using Machine Learning, 2017

64 Financial Measures

  • 1 net profit / total assets
  • 2 total liabilities / total assets
  • 3 working capital / total assets
  • 4 current assets / short-term liabilities
  • 5 [(cash + short-term securities + receivables - short-term liabilities) / (operating expenses - depreciation)] * 365
  • 6 retained earnings / total assets
  • 7 EBIT / total assets
  • 8 book value of equity / total liabilities
  • 9 sales / total assets
  • 10 equity / total assets
  • 11 (gross profit + extraordinary items + financial expenses) / total assets
  • 12 gross profit / short-term liabilities
  • 13 (gross profit + depreciation) / sales
  • 14 (gross profit + interest) / total assets
  • 15 (total liabilities * 365) / (gross profit + depreciation)
  • 16 (gross profit + depreciation) / total liabilities
  • 17 total assets / total liabilities
  • 18 gross profit / total assets
  • 19 gross profit / sales
  • 20 (inventory * 365) / sales
  • 21 sales (n) / sales (n-1)
  • 22 profit on operating activities / total assets
  • 23 net profit / sales
  • 24 gross profit (in 3 years) / total assets
  • 25 (equity - share capital) / total assets
  • 26 (net profit + depreciation) / total liabilities
  • 27 profit on operating activities / financial expenses
  • 28 working capital / fixed assets
  • 29 logarithm of total assets
  • 30 (total liabilities - cash) / sales
  • 31 (gross profit + interest) / sales
  • 32 (current liabilities * 365) / cost of products sold
  • 33 operating expenses / short-term liabilities
  • 34 operating expenses / total liabilities
  • 35 profit on sales / total assets
  • 36 total sales / total assets
  • 37 (current assets - inventories) / long-term liabilities
  • 38 constant capital / total assets
  • 39 profit on sales / sales
  • 40 (current assets - inventory - receivables) / short-term liabilities
  • 41 total liabilities / ((profit on operating activities + depreciation) * (12/365))
  • 42 profit on operating activities / sales
  • 43 rotation receivables + inventory turnover in days
  • 44 (receivables * 365) / sales
  • 45 net profit / inventory
  • 46 (current assets - inventory) / short-term liabilities
  • 47 (inventory * 365) / cost of products sold
  • 48 EBITDA (profit on operating activities - depreciation) / total assets
  • 49 EBITDA (profit on operating activities - depreciation) / sales
  • 50 current assets / total liabilities
  • 51 short-term liabilities / total assets
  • 52 (short-term liabilities * 365) / cost of products sold
  • 53 equity / fixed assets
  • 54 constant capital / fixed assets
  • 55 working capital
  • 56 (sales - cost of products sold) / sales
  • 57 (current assets - inventory - short-term liabilities) / (sales - gross profit - depreciation)
  • 58 total costs /total sales
  • 59 long-term liabilities / equity
  • 60 sales / inventory
  • 61 sales / receivables
  • 62 (short-term liabilities * 365) / sales
  • 63 sales / short-term liabilities
  • 64 sales / fixed assets
