
Technology

  • When is an elephant not an elephant?

    Elephant

    While you may be happy with social sites identifying what’s in your photos, some people go to great lengths to ensure that the images they post online cannot be automatically identified. Automated systems scanning for specific types of image can be fooled into believing that an elephant is a bowl of guacamole. The BBC reported last year ( http://www.bbc.co.uk/news/technology-41845878 ) that changing a single pixel in an image could fool image classification software.

    Thanks to research by Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo and James Storer of Brandeis University, their newly published paper (Protecting JPEG Images Against Adversarial Attacks, 2018) identifies new algorithms that could be used to protect images.

    To give a visual, here’s the elephant example that the team worked on, among others.

    Elephant images

    To the human eye, the four pictures look extremely similar. Working from left to right: the first is the original image; the second has had the attack applied, and is classified as guacamole; the third has been compressed to JPEG before classification, which correctly identifies the elephant, but with low confidence (it considers it may be a chimpanzee); the final image uses the new Aug-MSROI method, which renders an image that is classified correctly with good confidence and can still be decoded by normal JPEG routines.
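
    To give a flavour of why JPEG compression helps, here is a minimal Python sketch of re-compression as a defence. This illustrates the general idea only, not the team’s Aug-MSROI method; classify() is a hypothetical placeholder for whatever image classifier you use.

        # Minimal sketch: round-tripping an image through JPEG encoding tends
        # to destroy the fine pixel-level perturbations that adversarial
        # attacks rely on. Illustrative only; not the paper's Aug-MSROI method.
        from io import BytesIO
        from PIL import Image

        def jpeg_defend(image: Image.Image, quality: int = 75) -> Image.Image:
            """Re-encode an image as JPEG and decode it again."""
            buffer = BytesIO()
            image.convert("RGB").save(buffer, format="JPEG", quality=quality)
            buffer.seek(0)
            return Image.open(buffer)

        # Usage (classify is a hypothetical stand-in for a model's predict):
        # label = classify(jpeg_defend(Image.open("elephant.png")))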

    So if you were thinking that by overlaying or underlaying hidden images you could have the photo of your pet kitten identified as a bowl of chilli, think again: the likelihood of it still being seen as a purring kitten is high.

  • 64 financial ratios less likely to predict company failure than 6 qualitative measures

    Financial Prediction

    If you asked people six questions about a company, or researched 64 financial indicators about it, which would you expect to better predict whether the company will fail within a year?

    Jacky C K Chow of Aston Business School in Birmingham knows the answer. Chow’s MBA dissertation [1] studied and compared the options available. Chow looked at different machine learning methodologies for understanding and predicting the financial distress of a company. Using methods such as linear regression, decision trees and artificial neural networks, among others, Chow took the available data on companies in Poland to generate predictions of companies’ financial futures.

    Chow also analysed, using the same methodologies, a second dataset from a previous study, in which loan experts had rated companies on six qualitative measures (industrial risk, management risk, financial flexibility, credibility, competitiveness, operating risk).

    The results gave a very distinct answer. The loan experts’ qualitative answers, when analysed, gave a better indication of insolvency than the 64 financial ratios [see table below]. Furthermore, the findings indicated that the answers to just four of the questions, analysed with a decision tree classifier, were enough to predict insolvency with 90% accuracy. Collecting answers from loan experts is, however, more expensive than gathering financial data, and quality control must be applied, as with some techniques the conclusion can easily be biased.
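
    To make the approach concrete, here is a minimal sketch of a decision tree classifier over qualitative expert ratings, in the spirit of Chow’s analysis. The feature names follow the six qualitative measures above, but the training data is invented purely for illustration.

        # Minimal sketch of a decision-tree classifier on qualitative
        # loan-expert ratings. The data is invented for illustration;
        # ratings are encoded as positive=1, average=0, negative=-1.
        from sklearn.tree import DecisionTreeClassifier

        FEATURES = ["industrial_risk", "management_risk", "financial_flexibility",
                    "credibility", "competitiveness", "operating_risk"]

        X = [
            [ 1,  1,  1,  1,  1,  1],   # a healthy company
            [-1, -1, -1, -1, -1, -1],   # a distressed company
            [ 0,  1,  0,  1,  1,  0],   # healthy
            [-1,  0, -1, -1,  0, -1],   # distressed
        ]
        y = [0, 1, 0, 1]  # 1 = insolvent within a year

        clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(clf.predict([[0, -1, -1, 0, -1, -1]]))  # e.g. [1]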

    #telanovaReporter

    Need assistance with predicting your IT budget? Speak to a telanova Consultant about how we can help make your IT more predictable.

    Call now on 01344 567990

    [1] Jacky C K Chow, Analysis of Financial Credit Risk Using Machine Learning, 2017

    64 Financial Measures

    • 1 net profit / total assets
    • 2 total liabilities / total assets
    • 3 working capital / total assets
    • 4 current assets / short-term liabilities
    • 5 [(cash + short-term securities + receivables - short-term liabilities) / (operating expenses - depreciation)] * 365
    • 6 retained earnings / total assets
    • 7 EBIT / total assets
    • 8 book value of equity / total liabilities
    • 9 sales / total assets
    • 10 equity / total assets
    • 11 (gross profit + extraordinary items + financial expenses) / total assets
    • 12 gross profit / short-term liabilities
    • 13 (gross profit + depreciation) / sales
    • 14 (gross profit + interest) / total assets
    • 15 (total liabilities * 365) / (gross profit + depreciation)
    • 16 (gross profit + depreciation) / total liabilities
    • 17 total assets / total liabilities
    • 18 gross profit / total assets
    • 19 gross profit / sales
    • 20 (inventory * 365) / sales
    • 21 sales (n) / sales (n-1)
    • 22 profit on operating activities / total assets
    • 23 net profit / sales
    • 24 gross profit (in 3 years) / total assets
    • 25 (equity - share capital) / total assets
    • 26 (net profit + depreciation) / total liabilities
    • 27 profit on operating activities / financial expenses
    • 28 working capital / fixed assets
    • 29 logarithm of total assets
    • 30 (total liabilities - cash) / sales
    • 31 (gross profit + interest) / sales
    • 32 (current liabilities * 365) / cost of products sold
    • 33 operating expenses / short-term liabilities
    • 34 operating expenses / total liabilities
    • 35 profit on sales / total assets
    • 36 total sales / total assets
    • 37 (current assets - inventories) / long-term liabilities
    • 38 constant capital / total assets
    • 39 profit on sales / sales
    • 40 (current assets - inventory - receivables) / short-term liabilities
    • 41 total liabilities / ((profit on operating activities + depreciation) * (12/365))
    • 42 profit on operating activities / sales
    • 43 rotation receivables + inventory turnover in days
    • 44 (receivables * 365) / sales
    • 45 net profit / inventory
    • 46 (current assets - inventory) / short-term liabilities
    • 47 (inventory * 365) / cost of products sold
    • 48 EBITDA (profit on operating activities - depreciation) / total assets
    • 49 EBITDA (profit on operating activities - depreciation) / sales
    • 50 current assets / total liabilities
    • 51 short-term liabilities / total assets
    • 52 (short-term liabilities * 365) / cost of products sold
    • 53 equity / fixed assets
    • 54 constant capital / fixed assets
    • 55 working capital
    • 56 (sales - cost of products sold) / sales
    • 57 (current assets - inventory - short-term liabilities) / (sales - gross profit - depreciation)
    • 58 total costs / total sales
    • 59 long-term liabilities / equity
    • 60 sales / inventory
    • 61 sales / receivables
    • 62 (short-term liabilities * 365) / sales
    • 63 sales / short-term liabilities
    • 64 sales / fixed assets

  • Affective computing may soon be watching your valence and arousal

    Affective computing is the interdisciplinary study of human emotional analysis, synthesis, recognition and prediction (Continuous Affect Prediction Using Eye Gaze and Speech, Jonny O’Dwyer, Ronan Flynn, Niall Murray, Athlone Institute of Technology, 2018).

    The research the team carried out at the Athlone Institute of Technology used open-source software connected to cameras to identify participants’ emotions during testing. Previous research has generally relied on headsets and eye-tracking devices; using cameras is significantly less intrusive. The team focused on a combination of eye gaze and speech to predict the arousal and valence of the participant. As the image below shows, the two are cross-related.

    Arousal-Valence diagram from Abhang and Gawali

    Arousal can be classed as the amount of energy in the emotion, and valence as the positiveness or negativeness of the experience. The team used OpenFace and then extracted 31 features from the raw data.

    They combined this with captured audio, from which 2268 different speech features were extracted. The speech and eye-gaze results were combined in a number of different ways, and from that the team were able to achieve prediction scores of 0.351 for feature fusion and 0.275 for model fusion, representing 3.5% and 19.5% improvements for arousal and valence respectively, compared with unimodal performance.
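
    The sketch below illustrates the difference between the two fusion strategies. The feature counts match those quoted above; the random data and the choice of regressor are placeholders, not the team’s actual pipeline.

        # Illustrative sketch of feature-level vs model-level fusion.
        # Random data stands in for real eye-gaze and speech features.
        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        n = 200
        gaze = rng.normal(size=(n, 31))      # 31 eye-gaze features per sample
        speech = rng.normal(size=(n, 2268))  # 2268 speech features per sample
        arousal = rng.normal(size=n)         # annotated target, e.g. arousal

        # Feature fusion: concatenate the modalities, train a single model.
        feature_fused = Ridge().fit(np.hstack([gaze, speech]), arousal)

        # Model fusion: train one model per modality, then combine predictions.
        gaze_model = Ridge().fit(gaze, arousal)
        speech_model = Ridge().fit(speech, arousal)
        fused_pred = (gaze_model.predict(gaze) + speech_model.predict(speech)) / 2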

    What does this mean for the real world? One potential application is in health and diagnosis, where the true emotions of a subject need to be understood without intrusive devices affecting the results.

    #telanovaReporter

  • As technology uses data more smartly, the sharing economy is growing

    Umbrellas

    Shanghai, Friday, rain expected. Another rainy day in the city: leaving for work by public transport is popular, and carrying an umbrella for the regular days of rain is the norm. However, all this might be about to change, and before you begin thinking of a conspiracy to change the weather, it’s the smaller item that is changing.

    With advances in technology and decreases in the cost of components, there are now 10,000 umbrellas on the streets of Shanghai, each of which can be rented instantly. Some may remember when this happened before, at the beginning of 2017; that project didn’t get off to a great start, losing 300,000 umbrellas. Justin Jia of Zhejiang Tianwei Umbrellas has taken up the challenge: the Chinese entrepreneur has created a pioneering sharing app for umbrellas, and currently you can rent one of the 10,000 umbrellas in the city.

    The scheme uses both a deposit and a score based on the renter’s behaviour (e.g. reporting problems, damaging the umbrella, returning it on time), and this data is key to profitability. Rather than the umbrella, it is really the person that is being monitored. Much as you might improve your credit rating, soon you’ll be able to improve your share-ability rating.

    So next time rain is forecast, you may be able to leave your umbrella at home.

    #telanovaReporter

  • Code Smells, a rise to maturity

    The smell of Orchids

    Today’s world is full of an ever-increasing amount of program code. Back in 1999, Martin Fowler [1] defined the basis of code smells. Just as smell is the innate way humans detect good and bad things in nature, code has a smell, be it good or bad. A bad code smell is code that exhibits poor-quality programming techniques, such as duplicate code; a small example follows below. A new paper called Code Smells, by Peter Kokol, Milan Zorman, Bojan Žlahtič and Grega Žlahtič [2], has been published.
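
    As a small illustration (invented here, not taken from either paper), this is a classic duplicate-code smell and the extraction that removes it:

        # A classic "duplicate code" smell: the same tax logic copy-pasted
        # into two functions.
        def price_for_member(base: float) -> float:
            discounted = base * 0.9
            return round(discounted + discounted * 0.2, 2)  # add 20% tax

        def price_for_staff(base: float) -> float:
            discounted = base * 0.7
            return round(discounted + discounted * 0.2, 2)  # add 20% tax

        # The refactoring that removes the smell: extract the shared logic.
        def with_tax(amount: float, rate: float = 0.2) -> float:
            return round(amount + amount * rate, 2)

        def price(base: float, discount: float) -> float:
            return with_tax(base * (1 - discount))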

    Kokol’s paper [2] analysed the rise of discussion around code smells. Using bibliometrics to analyse research papers that reference code smells, Kokol was able to map and detect changes in the frequency and geographical distribution of papers.

    Their results highlighted 337 publications containing references, of which 70% were conference proceedings; they concluded this may mean that code smells research is still maturing.

    Plotting the details on a timeline, they identified that the largest rises were in 2009 and again in 2014. They also identified which countries were using the term the most: as might be expected, the USA was top, with almost twice as many papers as the next country, Italy. Italy was home to the individual institution that had produced the most papers, with 19 published by the Università degli Studi di Milano.

    The research papers indicated that code smell research is split into three themes: smell detection, software refactoring, and software development & anti-patterns. Of these, software development & anti-patterns was the most popular; by using anti-patterns and knowledge of common software development problems, code quality can be increased.

    Overall, an interesting paper suggesting that in the future, machine learning and other analysis tools may be run against software development code to identify whether it smells of sulphur or of wild orchids.

    [1] M. Fowler, Refactoring: Improving the Design of Existing Code, Reading: Addison-Wesley, 1999.

    [2] Peter Kokol, Code Smells, 2018

    #telanovaReporter

    Looking to test the quality of your IT configuration? Talk to our consultants about what changes you can make to get that wild orchid smell:

    Call telanova on 01344 567990

  • Counting people just got a lot more accurate. How many people are in this scene?

    Crowd

    The next US president may well be able to confirm the number of people attending his inauguration, and this is down to deep learning and convolutional neural networks.

    Yuhong Li, Xiaofan Zhang and Deming Chen, of Beijing University of Posts and Telecommunications and the University of Illinois at Urbana-Champaign, have recently published CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes, work that may revolutionise how accurate and fast crowd counting will be in the future.

    The congested scene analysis work is based on 2D images and can break an image down to calculate where the highest crowd densities are located. This will help organisers understand how to keep people flowing and, in static situations, how to keep the density low enough to ensure attendee comfort.

    The team’s work focused on density map creation, and by applying dilated CNNs to crowd counting for the first time, their system outperforms other state-of-the-art crowd counting solutions. Demonstrated on five public datasets, the overall model is smaller, more accurate, and easier to train and deploy.
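
    For the curious, here is a minimal PyTorch sketch of a dilated convolution, the building block CSRNet applies in its back end to enlarge the receptive field without pooling away resolution. The channel counts are illustrative, not the published architecture.

        # Minimal sketch of a dilated convolution. With dilation=2 and
        # padding=2, a 3x3 kernel covers a 5x5 neighbourhood while the
        # spatial size of the feature map is preserved.
        import torch
        import torch.nn as nn

        dilated = nn.Conv2d(in_channels=512, out_channels=512,
                            kernel_size=3, padding=2, dilation=2)

        x = torch.randn(1, 512, 64, 48)  # feature map from a front-end network
        print(dilated(x).shape)          # torch.Size([1, 512, 64, 48])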

    Using the ShanghaiTech crowd dataset, 1198 images containing a total of 330,165 people, split into two parts, one of highly congested scenes and one of sparse crowd scenes, the method improved accuracy, dropping the error rate by 7%.

    On the other datasets, CSRNet continued to outperform in most of the scenes, bettered in only one of the five sets, by the CP-CNN method.

    The method needn’t be reserved for counting people; it can also count vehicles, and on the TRANCOS dataset it outperformed all other methods.

    In summary, the next time you’re in a photo, there may be a new method of counting how many other people are in the picture with you, with better accuracy than previously attained. Time to invest in some camouflage.

    #telanovaReporter

  • Did you see that stop sign?


    Hee Seok Lee and Kang Kim have recently been working with convolutional neural networks for detecting traffic signs. To most drivers a traffic sign is normally fairly clear and identifiable; to a computer, figuring out where a sign is in a flat image is not so simple.

    The system isn’t expected to handle signs that aren’t fully visible, such as those with trees growing over them, so it’s time to get out the pruning shears. Where CNNs are really proving themselves, however, is in recognising the shapes at speed.

    The team experimented with different parameters. A deeper base network gives better results, but demands more processing power than the gains are worth, so a lighter base network was used. Reducing the resolution allowed recognition of the same signs at a higher frame rate, and cropping the image also sped up recognition: far-away traffic signs in the centre of a vehicle camera’s image aren’t recognisable, and traffic signs normally stand at the side of the road, so detection can be targeted at the appropriate regions, as sketched below.
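
    Here is a hedged sketch of that cropping idea: restrict detection to the strips of the frame where roadside signs plausibly appear, so the detector processes fewer pixels per frame. The region fraction is invented for illustration.

        # Illustrative sketch: crop a dashboard-camera frame to left and
        # right roadside strips before running the sign detector on each.
        import numpy as np

        def roadside_crops(frame: np.ndarray, side_fraction: float = 0.35):
            """Return the left and right vertical strips of the frame."""
            w = frame.shape[1]
            strip = int(w * side_fraction)
            return frame[:, :strip], frame[:, w - strip:]

        # Usage: run the detector on each crop instead of the full frame.
        # left, right = roadside_crops(frame)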

    In summary, the team found that for the best accuracy at speed, using the latest object detection architectures such as feature pyramid networks and multi-scale training, they could achieve 7 FPS on a low-power mobile platform: ideal for a self-driving car, or for alerting drivers to signs they may have missed!

    #telanovaReporter

  • Exploring off-world, and the work going on behind it

    arXiv:1701.07550 [cs.RO]

    It’s not something you’re likely to have been thinking about as you wandered around the supermarket at the weekend picking up groceries. But if you were potholing in a dark cave somewhere in the Pennines, you may have thought about how agile you have to be, how many different ways there are of getting from A to B, and how careful you have to be about getting stuck in a corner. By now you may be drawing parallels with IT, and how that sounds amazingly similar; rest assured there are companies out there that can help you with that. But back on track to where this blog was going, which was to an off-world object, be it moon, Mars or asteroid.

    You’ve probably seen NASA’s amazing Mars lander photos of the surface, not to mention a few conspiracy theory and alien-sighting add-ons, but what lies below the surface is still something of a mystery. What is known is that there are caverns, and possibly lava tubes. Now place yourself in theory mode and consider how you would explore those tubes remotely.

    Pictures of the NASA rovers don’t do justice to their size, and simply driving into a tunnel won’t help: there’s no light for recharging, no knowing how deep you might need to drop, and once inside, little chance of transmitting a signal back to Earth.

    Himangshu Kalita, Steven Morad, Aaditya Ravindran and Jekan Thangavelautham of the University of Arizona and Arizona State University have released a paper (Path Planning and Navigation Inside Off-World Lava Tubes and Caves, 2018) on just those considerations.

    The team produced a theoretical paper with some amazing ideas on how to overcome these issues and explore the caverns. Their SphereX bouncing robots communicate back to a base station and carry mirror tiles, which are laid down to bounce laser light from the rover outside, providing light, communications and power. They demonstrate how serious work can really look fun, and inspire the feeling that space exploration is once again having a ball.

    The algorithms and technology they describe for 3D mapping of tunnels with multi-hopping robots, and their “TransFormer” strategy for recharging robots deep inside a cave, cannot be done justice in a short blog post.

    Hop over and download their paper for yourself: https://arxiv.org/ftp/arxiv/papers/1803/1803.02818.pdf

    Have an inspired week ahead: everything we do here today brings us closer to being there tomorrow.

    #telanovaReporter

  • How much downtime is acceptable?

    System/36 - 30MB hard drive

    Many years ago, in a production factory that ran 24/7, everything would come to a halt once a year for the Test. While the Test was in progress all orders were stopped, and people hung around as if waiting for the starting gun of a race.

    Deep inside a locked room, people would be busy dismantling the System/36 and inserting a loan hard drive, then starting the big restore from 12-inch disks in magazines: first the monthly backup, then the weekly backup, then the daily backup. After many hours, copious cups of coffee and boxes of biscuits, the system would be sent live to see whether everything had worked. The downtime was costly, and that was thankfully without any live customers trying to connect online to check the status of their orders.

    Fast forward to just a few years ago, and replication was the in thing. Some companies built their server rooms in triplicate: a duplicate site only a few miles away, ready to be raided for parts, and another a few hundred miles away, ready for major disasters. Testing the fail-overs caused many problems, with orders and data lost because people didn’t realise they were entering data into the temporary fail-over test systems.

    Fast forward again, and enterprise companies were migrating to the cloud, aware of the cost but knowing the savings of having a system that is always on and can expand and shrink as demand requires. Backups are still needed, but they can be tested in a separate cloud area without disrupting the main business.

    Fast forward to today: costs have fallen, and now every small and medium business can enjoy the benefits of the cloud, whether that is running full in-the-cloud servers through Azure, or using cloud services such as Office 365 and G Suite.

    What comes next is open to debate, but serverless architecture looks a fair bet. It will see the server you built dynamically grow or shrink depending on how busy it is and how much work it is doing, so you won’t be paying for power you’re not using, and you’ll be saving the environment too.

    Stick with us and we’ll keep you up to date

    #telanovaReporter

    PS. Don't forget to try out our downtime cost calculator

  • Your teams are being targeted

    Sharks circling targeting users

    Like spearing fish in a barrel.

    The sharks are circling.

    Advances in technology have brought great benefits to humankind as a whole. But each step forward for mankind sees an additional step forward for the criminal underground.

    Machine learning is becoming more widespread. If your company uses AdWords, you may well already be using machine learning, which judges which of your adverts performs better based on the demographics and information of the person it displays the advert to.

    In the past, many of us received emails purporting to be from a bank or parcel carrier we had never used, and became accustomed to thinking, “but I don’t have a Western Union account”.

    What if machine learning was reading the public social media of you and your friends, and tailoring the email or social post to match what you wanted to see? Imagine if you suddenly saw a post on your social feed that said:

    [your name]
    I know you went to [insert place] last year and I wondered if you’d seen these photos of the place [insert sample image],
    catch up soon
    [insert a name of a friend]

    • How closely would you look at the poster’s signature?
    • Would you click and check out the photos?
    • What if it asked you to update your Adobe Reader / Gallery Pack software when you did?
    • Did it all seem legit?
    • What if the gallery page you visit contains more social engineering, such as a donate link to a JustGiving page?
    • What would your employees and friends do?

    Research published this month shows that by using machine learning to facilitate socially engineered phishing campaigns, attackers are achieving a 5-14% better rate of return.

    • How does a 5-14% higher chance of breach fare with your company?
    • When and how did you last assess your risk of attack?
    • What action have you taken to reduce that risk?
    • Are you ready for the onslaught?
    • What actions have you already taken to upskill your employees?
    • What packages are there that can assist you?

    Want to know more? Call telanova on 01344 567990.

