1. Introduction

1.1.1 Introduction:
There are currently many assistive devices and gadgets available in the market to help visually impaired people perform their routine tasks. This system mainly focuses on easy accessibility for the user. It will be a wearable jacket, so the user will not have to press any buttons to use it. The system will support voice commands for performing most tasks. On a voice command, the system will give a description of the surrounding environment, and this description will help the end user identify objects around them.
1.1.2 Problem Summary:
Existing assistive devices do not have complete support for partially or fully visually impaired users. This system supports all kinds of visually impaired users and provides an easier and more effective way to use it.

1.1.3 Purpose:
Our main goal is to develop a feasible and flexible wearable technology for visually impaired people who do not want to, or cannot, undergo certain medical procedures. To keep it feasible, the whole system will be based on a Raspberry Pi and Google Cloud services, which are the most practical way to enhance the vision of visually impaired people. The design of the system will support any kind of visually impaired patient, and there will be no medical procedure for the patient to undergo. Its design will be user-friendly: to operate the system, the user will only have to give commands through voice. It will provide basic functionalities like describing the user's surroundings, reading printed textual materials, voice navigation, and adding people to contacts via facial recognition. Other features can also be added to this system depending on the type of visual impairment of the user. For example, if the user is color blind, the system can describe the color of the object in front of them, and if the user suffers from night blindness, the system can help by swapping the camera module for one with special night-vision lenses. This system will help visually impaired users perform their daily tasks with ease.
1.1.4 Scope:
The main intention of this system is to enhance the sight of visually impaired people. It can help with any kind of visual impairment and can be modified according to the user's preference.
A visually impaired user can use this jacket to travel without the help of any companion and will feel more secure and confident by using this system. It will also provide basic voice navigation to nearby facilities like ATMs, hospitals, schools, and shops.
Many visually impaired people are not able to read Braille, and most reading materials are available only in printed text format. So, this system helps them listen to printed textual materials.
Many visually impaired users may also suffer from color blindness, in which the user is not able to see certain colors as a normally sighted person does. The system can be set to describe the colors of the object in front of them.
The system can also be further extended by adding a Braille printer to print, in Braille, whichever text file the user generates.
Many people also suffer from night blindness, in which the user experiences blindness in the absence of daylight or in low light. This system can also be used to help this type of visually impaired user.

1.1.5 Features:
Picture diary with voice description.

Surrounding description.

Person awareness by facial recognition.

Internet radio and podcast.

Weather information.

Printed text material reading.

1.1.6 Modules:
Main module:
The main system module will be installed on the user-side Raspberry Pi and will run on a lightweight Linux-based OS. This module will handle the user's requests to perform the desired tasks. A single task can contain a series of operations, all of which will be performed and handled by this main module. It will also maintain the system and manage memory-space utilization.
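A minimal sketch of how the main module might route a transcribed voice command to a task; the handler names, keywords, and return strings below are hypothetical placeholders, not the project's actual code.

```python
# Hypothetical task handlers; the real ones would call the camera and
# cloud-service modules described in the sections that follow.
def describe_surroundings():
    return "surroundings described"

def read_text():
    return "text read aloud"

# Keyword-to-handler table the main module could consult.
HANDLERS = {
    "describe": describe_surroundings,
    "read": read_text,
}

def dispatch(command_text):
    """Route a transcribed voice command to the first matching task."""
    for keyword, handler in HANDLERS.items():
        if keyword in command_text.lower():
            return handler()
    return "command not recognized"
```

A real dispatcher would likely need fuzzier matching, but a keyword table keeps the main module small enough for the Raspberry Pi.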

Image processing:
A camera module attached to the system will capture images of the surroundings, and all images will be stored in system memory. The Google Cloud Vision API will be used in the background to analyze the image contents, and all resultant values will be downloaded by this module. The module will derive approximate values from the raw data and send the keywords generated from it to the natural language module.
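The sorting-and-keyword step described above can be sketched as follows; the plain dictionaries stand in for the Vision API's labelAnnotations response, and the score threshold and top-N cutoff are assumed values, not figures from the project.

```python
def keywords_from_labels(label_annotations, min_score=0.6, top_n=3):
    """Keep the most confident label descriptions, best first."""
    confident = [l for l in label_annotations if l["score"] >= min_score]
    confident.sort(key=lambda l: l["score"], reverse=True)
    return [l["description"] for l in confident[:top_n]]

# Plain dicts mimicking the shape of a labelAnnotations response.
labels = [
    {"description": "dog", "score": 0.97},
    {"description": "grass", "score": 0.81},
    {"description": "mammal", "score": 0.55},
    {"description": "park", "score": 0.73},
]
print(keywords_from_labels(labels))  # → ['dog', 'grass', 'park']
```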

Speech recognition:
This module will receive commands from the user to perform further tasks. To save power, it will only be activated when the user presses a physical button attached to the system. The microphone will record the user's voice commands and feed them to the Google Speech API, which will generate raw text data. This raw data will be used to analyze the user's command and will help the system decide which task is to be performed.

Text reading:
This module will help the user read printed textual materials. It will be activated whenever the system detects textual content in an image file. The module will use the Google Cloud Vision API's text detection to generate text data, download that raw data, convert it to speech via the speech module, and play it through the user's hearing aids.
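A small sketch of how the detected text might be pulled out of an OCR response; the stub list mimics the shape of the Vision API's textAnnotations result, where the first entry carries the full detected text and the rest are per-word entries.

```python
def extract_text(text_annotations):
    """Return the full detected text block from an OCR-style response."""
    if not text_annotations:
        return ""
    # By convention the first annotation holds the complete text.
    return text_annotations[0]["description"].strip()

# Stub data standing in for a real textAnnotations list.
response = [
    {"description": "BUS STOP\nRoute 42\n", "locale": "en"},
    {"description": "BUS"},
    {"description": "STOP"},
]
print(extract_text(response))  # → BUS STOP⏎Route 42
```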

GPS navigation:
This module will be used whenever the user wants to reach a certain destination or to save their current location. It will track the user's location and routine destinations and suggest transport services. It will also guide the user to the desired destination through voice navigation.
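One way to pick the nearest saved facility to announce for voice navigation is a great-circle (haversine) distance comparison; this is an illustrative sketch with hypothetical coordinates, not the project's navigation code.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    to_rad = math.radians
    dlat = to_rad(lat2 - lat1)
    dlon = to_rad(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(lat1)) * math.cos(to_rad(lat2))
         * math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def nearest_facility(user_pos, facilities):
    """Pick the closest saved facility to the user's position."""
    return min(facilities,
               key=lambda f: haversine_km(*user_pos, f["lat"], f["lon"]))

# Hypothetical saved destinations.
places = [{"name": "ATM", "lat": 23.03, "lon": 72.58},
          {"name": "Hospital", "lat": 23.10, "lon": 72.55}]
print(nearest_facility((23.02, 72.57), places)["name"])  # → ATM
```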

Status notification:
Whenever the user first puts on this wearable jacket, the system will notify the user of all required notifications, such as battery indications or weather updates.

The user can also use this module to listen to their favorite podcasts or online radio, and extra features can be added by downloading other voice-based applications to the system.

1.1.7 Objectives:
To provide an easier-to-use assistive aid for visually impaired people to perform their daily routine tasks.

To make the user aware of their surroundings.

To help them summarize text materials.

To make the user aware of objects around them.


The planning was as follows: first of all, we decided the algorithms to be used for each procedure. Secondly, we designed the system architecture and its workflow. We then arranged all the required hardware components and implemented all the algorithms.

Table: 1.5.1 Plan of work
Task | Months | Work
Task 1 | Jun-Jul | Research analysis
Task 2 | Aug-Sep | Design analysis, canvas, PSAR & implementation
Software Requirements for Implementing the System:
Operating system: Linux
Coding language: Python
IDE: Python 3
Cloud: Google Cloud Platform
Hardware Requirements:
Minimum Hardware Requirements For Project:
Table: 1.6.1 Hardware Requirements
SD Card | 8 GB
Wi-Fi Adapter | 802.11 b/g/n
Power Bank | 2000 mAh
Camera | 2 MP
Software Requirements:
Table: 1.6.2 Software Requirements
Back end | Google Cloud Platform
User Characteristics:
Users should be able to speak and understand the Hindi or English language.

Users should not have a hearing disability.

2. System Design
2.1.1 AEIOU Summary:

Fig: 2.1.1 AEIOU Summary
The AEIOU canvas describes the Activities, Environment, Interactions, Objects, and Users of the observation site. Activities show the overall system activities, which are the common activities of any visually impaired person. The Environment study includes the system, traffic overload, day/night, weather, etc. Interactions include all interactions with the user, such as volunteers, readers, colleagues, and family members. Objects are walking sticks, Braille typewriters, bells, shades, etc. Finally, the Users are the visually impaired, the color blind, the tunnel-vision disabled, etc.

2.1.2 EMPATHY Canvas:

Fig 2.1.2 EMPATHY Canvas
The empathy canvas describes the user who experiences the system, that is, the visually impaired person. Other stakeholders are NGOs, the central government, the state government, etc. Activities show the overall system activities carried out, such as travelling, story narration, daily transactions, book reading, pet walking, voice acting, etc. It also includes stories of the user that describe happy and sad situations.

2.1.3 IDEATION Canvas:

Fig 2.1.3 IDEATION Canvas
The ideation canvas describes the problems and solutions of the system. People include all the people involved in the system and their activities. The situation/context/location section describes all the possible locations in which the system is used and the possible solutions proposed to be used by the system.


2.1.4 Product Development Canvas:
The product development canvas describes the purpose of the system, the end user's experience of the product, the functions of the product and their features, the components to be used to develop the system, the people involved in the system, etc. The problems found by the customer are revalidated and the product is redesigned accordingly.

2.2.1 Sequence Diagram:

Fig: 2.2.1 Sequence Diagram
2.2.2 Activity Diagram:

Fig: 2.2.2 Activity Diagram
2.2.3 Use Case Diagram:

Fig: 2.2.3 Use case Diagram
2.2.4 DFD Diagrams:

Fig: 2.2.4 DFD Level 0 Diagram

Fig: 2.2.4 DFD Level 1 Diagram

Fig: 2.2.4 DFD Level 2 Diagram
The implementation of all the modules has been completed, including object detection and object description, text detection and speech synthesis, as well as automatic wireless connection and activation of API services through a valid JSON key.

3. Detail Design
3.1 System Analysis:
3.1.1 Study of current system:
We have taken more than one system based on this idea as a reference. In those systems, we found that some improvements could be made. Some systems used ultrasonic sensors to give the user a sense of obstacles and also used a speech synthesizer to guide the path; however, pairing an ultrasonic sensor with a speech synthesizer is highly inefficient, and they could have used vibrating motors to make users aware of obstacles in real time. Also, many systems do not provide any special support for the different kinds of visual impairment.

3.1.2 Proposed System:
In this proposed system, we are using a Raspberry Pi with voice and image recognition libraries to make it useful for visually impaired users. The system will support customization for all kinds of visually impaired users and will be operated using voice commands. A single-board computer will process all images and voice commands. The system will generate output in the form of speech and will support a few regional languages to provide ease of access to most regional-language speakers.

3.1.3 Feasibility Study:
We conducted a few feasibility studies on this idea, which led us to this system design. Even though this system is not the most feasible solution, it covers all the current problems. In our future work next semester, we have planned a new design of the system to make it more feasible than the previous one.

Requirements of New System
User Requirements
Ease of access.

Regional language support.

Customizable system output.

Fully operated by voice commands.

Object description.

Text reading.

Color detection.

Functional Requirements
Object recognition.

Text recognition.

Voice recognition.

Speech synthesis.

Non-Functional Requirements
Should process output in nearly real time.

Should provide information regarding network speed if it takes a while to process the output.

Should indicate battery status through the speech synthesizer.

Should be as lightweight as possible for user convenience.

4. Implementation
This chapter covers the actual implementation and snapshots.
4.1 System flow:

Fig 4.1.1 System flow
Component Analysis:
When the program is executed for the first time, it will analyze all the required components and services.
API Services Activation:
The system will access the local key.json file to activate the API services.
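The standard way for the Google Cloud client libraries to locate a service-account key is the GOOGLE_APPLICATION_CREDENTIALS environment variable; the path below is a hypothetical example, and key.json must be the project's own credentials file.

```python
import os

# Point the Google Cloud client libraries at the local service-account key.
# The path is an assumed example, not the project's actual location.
KEY_PATH = "/home/pi/key.json"
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = KEY_PATH
```

Any Vision or Speech client created after this point would pick up the key automatically.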

User Input Command:
The system will record the user's voice command and analyze the query for further processing.

Voice Command Processing:
This module will check the result of the voice command and decide which task the user wants performed.
Speech Synthesis:
This module will convert the results of the image processing unit into speech using the speech synthesizer.
4.2 Algorithm steps:
Input: Voice_Commands, Voice_Command_Rec, Image_File, JsonLabels.description, JsonText.description.

Output: Speech_Synthesis
Step 1: Input the voice command.

Step 2: Detect the voice command.

Step 3: Switch on Command_choice.

Step 4: Case 1: go to Step 5.
Case 2: go to Step 6.

Step 5: Capture image.
Store image.
Upload image to Google Cloud Platform.
Get label.description.
Sort labels.
Convert into audio format using the speech synthesizer.

Step 6: Capture image.
Store image.
Upload image to Google Cloud Platform.
Get text.description.
Convert into audio format using the speech synthesizer.

Step 7: End.
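The steps above can be sketched in Python as follows; the capture, cloud, and synthesis calls are stubbed with fixed return values, since they depend on the camera hardware and API credentials, so this is an illustration of the control flow only.

```python
# Stubs standing in for the camera and Google Cloud calls.
def capture_image():
    return "image.jpg"

def get_labels(image):
    return ["dog", "grass"]      # stand-in for label.description values

def get_text(image):
    return "BUS STOP"            # stand-in for text.description

def synthesize(text):
    return f"speaking: {text}"   # stand-in for the speech synthesizer

def run_command(command_choice):
    """Steps 4-6: branch on the command choice, then capture and describe."""
    image = capture_image()      # Steps 5/6: capture and store the image
    if command_choice == 1:
        labels = sorted(get_labels(image))
        return synthesize(", ".join(labels))
    elif command_choice == 2:
        return synthesize(get_text(image))
    raise ValueError("unknown command")

print(run_command(1))  # → speaking: dog, grass
print(run_command(2))  # → speaking: BUS STOP
```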
4.3 Snapshot:

Fig: 4.3.1 Hardware
Camera scene:

Fig: 4.3.2 Camera scene
Output in Command Prompt:

Fig: 4.3.3 Output in Command Prompt
5. Summary
We have completed our project work using a software engineering and system analysis and design approach. The work was carried out with pre-planned scheduling under time constraints and with result-oriented progress in project development.

Our project started in the last week of July and was completed at the end of September. Initially, there were some problems in certain phases, but we planned to resolve them. We divided our work into different phases and solved the problems and difficulties.

5.1 Advantages
Will help a visually impaired person be aware of his/her surroundings.

Will help read printed textual materials like newspapers, books, cards, notice boards, etc.

Will help to detect currency note integrity.

Easy to wear and use.

5.2 Usefulness with respect to existing solutions.
In the existing system, we have implemented the most basic necessary tasks for any visually impaired user. We plan to extend the term of the project to improve system performance and add more features.

5.3 Future work.
In future work, we plan to use proximity sensors to make the user aware of nearby obstacles.

In future work, we plan to make the design of the system more user-friendly and compact.

In the current work, the system lags in performance, so we will improve its performance.
Yash Pathak:
Chirag Thakur:
Henali Hapaliya: