ROADMAP FOR THE INTEGRATION OF SENSING SYSTEMS IN ROBOTICS-BASED MANUFACTURING OF INDUSTRIALIZED CONSTRUCTION

By

KUSH SHAH

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN CONSTRUCTION MANAGEMENT

UNIVERSITY OF FLORIDA

2022
© 2022 Kush Shah
To my parents, Mrs. Lopa Shah and Mr. Jayesh Shah, for always supporting me in my endeavors
ACKNOWLEDGMENTS

I would like to express my gratitude to my chair, Dr. Aladdin Alwisy, for his invaluable patience and feedback. I also could not have undertaken this journey without my defense committee, who generously provided knowledge and expertise. I am also grateful to my family, especially my parents and sister, for always providing me with emotional and moral support. Their belief in me has kept my spirit and motivation high during this process. Lastly, I would be remiss in not mentioning my friends for their editing help, feedback sessions, and emotional support.
TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION
1.1 Research Motivation
1.2 Research Objective
1.3 Research Limitation
1.4 Organization of Thesis

2 LITERATURE REVIEW
2.1 Current Status and Future Trends in Industrialized Construction
2.1.1 Benefits of Automation in Industrialized Construction
2.1.2 Limitations of Automation in Industrialized Construction
2.2 Existing Studies in Sensing Technologies

3 METHODOLOGY
3.1 Overview of Sensing Technologies
3.1.1 Global Positioning System (GPS)
3.1.2 Radio Frequency Identification (RFID)
3.1.3 Wireless Local Area Network (WLAN)
3.1.4 Ultrasonic
3.1.5 Vision-Based Sensing
3.1.6 Light Detection and Ranging (LiDAR)
3.1.7 Ultra-Wideband (UWB)
3.2 Selection of Sensing System
3.2.1 Camera-Based Technique
3.2.2 Laser-Based Technique
3.3 Factors Influencing Occlusions in Human-Robot Collaboration
3.3.1 Characteristic Factors Influencing Sensing Technology
3.3.1.1 Wood types
3.3.1.2 Human-related factors
3.3.1.3 Robot types
3.3.2 Task-Driven Factors Influencing Sensing System
3.3.2.1 Human posture and angle in industrialized construction
3.3.2.2 Pair-wise analysis of tasks and postures
3.3.2.3 Task-driven material and robot criticality
3.4 Identification of Scenarios for RGB-D Camera and LiDAR
3.4.1 Scenarios for RGB-D Camera
3.4.1.1 Only human
3.4.1.2 Only material
3.4.1.3 Only robot
3.4.1.4 Manufacturing (including humans, materials, and robots)
3.4.2 Decision Matrix for Camera
3.4.3 Scenarios for LiDAR
3.4.3.1 Only human
3.4.3.2 Only material
3.4.3.3 Only robot
3.4.3.4 Manufacturing (including humans, materials, and robots)
3.4.4 Decision Matrix for LiDAR
3.5 Guidelines for Integration of Sensing System in the Robotic-Based Manufacturing of Offsite Construction
3.5.1 Guidelines for Camera
3.5.1.1 Task-driven guidelines
3.5.1.2 Posture-driven guidelines
3.5.1.3 Factor-driven guidelines
3.5.1.4 Scenario-driven guidelines
3.5.2 Guidelines for LiDAR
3.5.2.1 Task-driven guidelines
3.5.2.2 Posture-driven guidelines
3.5.2.3 Scenario-driven guidelines
3.5.3 Occlusion-Based Framework for Camera/LiDAR
3.5.4 Case Study

4 CONCLUSION
4.1 Significance of the Research
4.2 Suggestions for Future Studies

APPENDIX: CALCULATIONS FOR THE DECISION MATRIX
LIST OF REFERENCES
BIOGRAPHICAL SKETCH
LIST OF TABLES

3-1 Accuracy of sensors
3-2 Pros and cons of sensing technologies
3-3 RGB-D cameras
3-4 Summary for camera
3-5 LiDAR review
3-6 Summary for LiDAR
3-7 Wood characteristics
3-8 Mean men height
3-9 Mean women height
3-10 Mean men waist
3-11 Mean women waist
3-12 Mean shoulder width
3-13 Factors influencing sensing system
3-14 Different robots and their characteristics
3-15 Tasks in industrialized construction
3-16 Postures and their angles
3-17 Postures in IC
3-18 Task-related postures
3-19 Compatible tasks
3-20 Description of the identified cases from pair-wise analysis of postures
3-21 Pair-wise analysis of postures
3-22 Material used for wall assembly
3-23 Material criticality
3-24 Robot criticality
3-25 Posture criticality for camera
3-26 Task criticality for camera
3-27 Decision matrix for LiDAR
3-28 Task criticality for LiDAR
3-29 Small-scale manufacturing tasks
3-30 Large-scale manufacturing tasks
LIST OF FIGURES

3-1 Framework of the methodology
3-2 Factors influencing sensing system
3-3 Skin color chart
3-4 Typical body angles
3-5 Posture-based occlusion in the pair-wise analysis: A) detailed analysis of posture 1-2, B) detailed analysis of posture 2-4, C) detailed analysis of posture 4-6
3-6 Scenarios for 3D camera and LiDAR
3-7 Example of scenarios: A) task 1, B) task 3, C) task 8
3-8 Human visibility via camera
3-9 Occlusion-based rate of visibility: Case 1
3-10 Occlusion-based rate of visibility: Case 2
3-11 Appropriate placement of LiDAR
3-12 Human visibility through LiDAR
3-13 Number of cameras/LiDARs
3-14 Small-scale manufacturing setup
3-15 Large-scale manufacturing setup
Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science in Construction Management

ROADMAP FOR THE INTEGRATION OF SENSING SYSTEMS IN ROBOTICS-BASED MANUFACTURING OF INDUSTRIALIZED CONSTRUCTION

By

Kush Shah

December 2022

Chair: Aladdin Alwisy
Cochair: Eva Agapaki
Major: Construction Management

Construction sites are complex and dynamic environments with frequent movement and interaction among people, machinery, and goods. These interactions, combined with the specific nature of construction projects, labor-intensive jobs, and uncontrolled weather conditions, make construction among the least productive and most dangerous industries. Advanced building construction systems and methods, such as industrialized construction, aim to address critical productivity and safety concerns by taking advantage of cutting-edge industrial and manufacturing technologies. These advanced technologies elevate the construction industry from manual, stick-built practices on traditional job sites to structured working processes within controlled environments. In highly collaborative building jobs, however, the integration of large-scale, quick, heavy-duty manufacturing technologies and machines, such as industrial robotic arms, creates new safety hazards for construction workers. These industrial robotic arms have demonstrated their usefulness in manufacturing facilities; nevertheless, these advanced machines are intended to function within compartmentalized, fenced-off cells and workstations. Indeed, commercially available industrial robotic arms do not possess the inherent technical and technological capabilities necessary to respond safely to human tasks and the associated intentional and unintended movements. The future of smart factories will facilitate seamless interactions between human workers and robots equipped with advanced sensing systems and artificial intelligence. Those sensing technologies will allow robots to avoid collisions with human employees by actively monitoring human behaviors, postures, and motions and reactively predicting human reactions. As a result, developing a roadmap for incorporating sensing systems into the uncertain, human-centric manufacturing of industrialized construction is the focus of this thesis. This study presents a systematic review of state-of-the-art sensing technologies (such as 2D and 3D cameras and LiDAR) and algorithms in order to select the best-suited sensing systems for human-robot collaboration (HRC) in industrialized construction. The proposed evaluation will take into account key factors that represent the unique characteristics of construction materials, human workers, robotic workstations, and human and robot tasks in industrialized construction. A task-driven guideline will then be developed to help construction professionals make informed decisions regarding the integration of sensing systems in their respective, real-world manufacturing lines. This research is a first step that sheds light on the current restrictions and limitations of interactions between people and industrial robots. The developed framework and guidelines will contribute to facilitating safe human-robot collaboration in unfenced environments.
CHAPTER 1
INTRODUCTION

1.1 Research Motivation

The rise in population, urbanization, technology, and investments in the construction industry has resulted in a worldwide increase in demand for residential, commercial, and industrial structures. This increase in demand for buildings continues to expand the construction industry's bandwidth, which calls for the use of efficient, advanced technologies and machines, such as robotics (McKinsey & Company, 2020). In the past ten years, breakthroughs in the manipulation, sensing, and computation of these robotic systems have begun to make it possible to utilize robots in unstructured situations, like construction, to help with risky, exhausting, and repetitive manual tasks (Brosque et al., 2020). A few robots have begun to break into the construction industry. These robots are usually mounted on movable platforms to perform simple, repetitive activities on construction sites, such as demolition (Mu et al., 2022), welding (Nagata et al., 1997), laying bricks (Pritschow et al., 1996), placing studs, and other masonry tasks (Adepoju et al., 2022). In addition to these robotic advancements, the exoskeleton is another emerging technology used in construction, which can exponentially increase the strength, speed, and agility of an average worker (Ma et al., 2018). The speed and quality of construction work can be improved using these robots. However, the integration of such robotic systems creates new safety challenges that require additional guidelines and frameworks to allow human workers to work alongside robots and combine their operations safely (Davila Delgado et al., 2019). With the rising demand for human-robot collaboration and the increasing complexity of collaborative tasks, the need for safety guidelines and supportive systems, such as advanced sensing systems, becomes an absolute necessity in order to reduce the risk of safety incidents that may lead to fatal injuries and death (Arents et al., 2021).

In the past, the manufacturing industry had to separate workers from robots to create safe working environments (You et al., 2018). However, integrating robotics in labor-intensive industries necessitates close collaboration between humans and robots in order to break current productivity barriers and reach another peak in productivity and safety (Lankin et al., 2020). Introducing robots into the human environment requires advanced control frameworks and interfaces that provide sufficient sensory feedback capable of capturing humans' intuitive actions and interactions (Brosque et al., 2020). Construction sites, for instance, are intricate environments with various materials, tools, and equipment. As a result, correctly detecting humans in the environment and responding to the circumstances become key components of robots working alongside humans. The construction sector may gain many opportunities by combining advanced systems to track the movement of workers and materials around the job site (Tang et al., 2020). Robots should operate in a way that can detect the presence and vicinity of humans to collaborate on construction projects. In other words, the robot should be programmed to determine how far people are from the area where it is working and then adjust its working pace if the human is close to or in contact with the robot (Story et al., 2022).

This research aims to develop a roadmap for safe human-robot collaboration (HRC) in industrialized construction by establishing guidelines and a framework for the selection of sensing systems that minimize or eliminate occlusion in robotics-based manufacturing workstations.
14 selection of sensing systems that minimize or eliminate the occlusion in robotics ba sed manufacturing workstations. 1.2 Research Objective In seeking to reach the overall goal, as stated above, the following research objectives will be performed: 1. To identify and assess the most effective sensing system suitable for detecting and locating humans, material, and equipment in construction activities (e.g., LiDAR, RFIDs, Vision based, ultrasonic sensors, WLAN, Ultra wideband, etc.). 2. To identify the key characteristics and task driven factors influencing the detection and location of hu mans, materials, and robots. 3. Develop guidelines and a framework for the integration of advanced sensing systems into the framing tasks, as a representative example of robotics based manufacturing tasks of industrialized construction 1.3 Research Limitation While the research seeks to help construction, practitioners select suitable sensing systems for robotics based manufacturing, the application of established guidelines and developed framework is limited by the points mentioned below pertaining to the tasks, human crews, and materials: Tasks: The study analyzes only the framing workstation, as a representative task of the manufacturing in IC. Human Crew: A pair wise analysis of construction workers is considered in the study of different supporti ve tasks and corresponding postures. Materials: The guidelines and framework only consider the characteristic of wood studs and cement plasterboard, as the key construction materials of the framing workstation.
1.4 Organization of Thesis

This thesis is divided into four chapters: Introduction in Chapter 1, Literature Review in Chapter 2, Research Methodology in Chapter 3, and Conclusion and Recommendations in Chapter 4. Chapter 1 (Introduction) presents the research aim, objectives, limitations, and a general overview of the study. Chapter 2 (Literature Review) includes background information on the problem and recent academic studies related to the principal objective. The literature review begins with a discussion of the current status and future trends in industrialized construction, followed by an investigation of the benefits and limitations of automation in industrialized construction. A summary of the proposed method is provided at the beginning of Chapter 3 (Methodology). The types of wood used in construction and their effects on sensing systems are discussed. Later, human characteristics like shoulder width, height, skin tone, ethnicity, and body postures are considered for successfully detecting humans. Finally, robot characteristics like reach, color, and speed are presented. Next, different tasks and postures are identified in the offsite manufacturing of industrialized construction, specifically the wall framing activity. Later, through those tasks and postures, different scenarios and cases are discussed from the RGB-D camera and LiDAR perspectives. These scenarios and cases must be considered to create a safe and workable environment while dealing with robots and the sensing system that supports the robotic station. Finally, a guideline is developed for construction practitioners to effectively understand how to integrate those technologies into the manufacturing line. In the last chapter, Chapter 4 (Conclusion), the outcomes of this thesis are discussed. The conclusion is followed by the significance of the research and suggestions for further studies building on this thesis. Finally, the references and appendix can be found after the last chapter.
CHAPTER 2
LITERATURE REVIEW

2.1 Current Status and Future Trends in Industrialized Construction

Industrialized construction is a modification of traditional construction that employs manufacturing design and optimization techniques to address complex construction difficulties (Qi et al., 2021a), and its advantages and disadvantages have been thoroughly studied. Reducing building costs, increasing productivity, and cutting down on construction time are some of the advantages of industrialized construction that have been frequently stated (Abanda et al., 2017a; Jin et al., 2018; Yin et al., 2019). Additionally, construction workers conduct IC tasks with a specific scope of work, which results in an increase in production efficiency and a reduction of quality issues (Ibrahim, 2009). Through the use of industrialized construction, the construction industry may possibly save $20 billion in costs and 50% of its time each year (Bertram et al., 2019). Utilizing digital fabrication for industrialized construction is essential for reaching this goal (Bock, 2015a). The industrialized construction (IC) process makes considerable use of cutting-edge technology in tasks ranging from the offsite prefabrication of large building components to the transformation of manual construction tasks into a seamless process of assembly and installation of these prefabricated assemblies and pieces (Alwisy et al., 2018; Industrialization of Construction, 2022). Compared to conventional construction, industrialized construction has a greater potential for successfully adopting innovative technologies because of its factory-based nature (Qi et al., 2021b). The special, factory-based characteristics of industrialized construction make it well suited to integrating advanced technologies (Abanda et al., 2017b). Industrialized construction's working environment frequently does not alter, which makes it easier to set up devices and creates an ideal setting for implementing emerging technologies (Ahn et al., 2019a).

Industry 4.0 is a new trend that uses digital technologies and automation to promote the shift toward industrialized construction (Sawhney et al., 2020). In the field of digital technologies, cyber-physical systems (CPS) and the internet of things (IoT) are two examples of information and communication technologies that have become more relevant due to their abilities to create smart and connected cities (Costin & Eastman, 2019). Previous studies recommend advanced sensors for monitoring the progress of the construction process as well as data interchange and exchange throughout the design, construction, operations, maintenance, and decommissioning stages of a capital project (Akinci et al., 2006; Cheok & Stone, 1999; Reed, 2002; Shih, 2017). These digital technologies enable a revolution in information sharing and exchange. Information integration systems are becoming more prevalent in the construction business for project sector coordination, sharing of design documents, and communication. Existing cloud-based business information models, including Enterprise Resource Planning (ERP) and custom system integration apps, can achieve effective information exchange and integration (Qi et al., 2021c). In terms of automation, construction automation research is heavily focused on developing new techniques for gathering, processing, analyzing, and disseminating construction information (C. Balaguer, 2004). Influenced by IC, the construction industry has been incorporating more technologies and methods from the manufacturing sector. Indeed, construction firms are rapidly integrating cutting-edge technologies into their projects, such as digital sensors,
intelligent machinery, and new software programs combined with a connected BIM platform (Autodesk, 2019).

2.1.1 Benefits of Automation in Industrialized Construction

Industrialized construction still has some drawbacks despite its advantages, such as ineffective supply chains, a lack of interdependent communication, and a lack of production and installation quality inspection methods (Zhang et al., 2018). These problems might make industrialized construction less effective and prevent its spread within the construction sector. Utilizing cutting-edge technology can help resolve these problems and realize industrialized construction's full potential. Due to its factory-based character, industrialized building can particularly rely on sensing technologies (Abanda et al., 2017a). The working environment in industrialized construction does not frequently change, which makes it easier to put up devices and creates the perfect setting for implementing developing technologies (Ahn et al., 2019b). An industrialized construction project's management methods can be significantly improved by combining modern technologies (such as IoT, sensing systems, and optimization algorithms) (Costin et al., 2019). In fact, according to (Razkenari et al., 2019), emerging technologies could improve industrialized construction in various ways, such as by enhancing teamwork and collaboration, enhancing information exchange and accessibility across partner companies, and enhancing productivity. Additionally, unmanned shift work can be carried out continuously with the help of automation and robotics (Anandan, 2015). When automation and digital technologies are combined, it is possible to monitor quality factors in real time and reduce mistakes. Working in a controlled and predictable setting, as is the case with prefabrication, enables greater quality control
than is feasible on a typical jobsite (Abioye et al., 2021). Due to the factory-controlled environment, better material storage and handling, increased use of automated equipment, and higher-quality products may be promoted (Mohd Nasrun Mohd Nawi, 2015).

2.1.2 Limitations of Automation in Industrialized Construction

IC is still a developing construction system. Despite its immense potential to offer solutions to the industry's most pressing issues, there are some restrictions that must be overcome before IC can effectively adopt state-of-the-art technologies and techniques and completely transform the sector (Autodesk, 2019). Robots, as an example of innovative technologies, have been employed in the construction industry since the 1980s, but integration challenges persist (Bock, 2007, 2015b; Miroslaw Skibniewski, 1988). This can be linked to a lack of comprehensive information on construction robots and the dangers of adopting robotic technology (Davila Delgado et al., 2019; Pan & Pan, 2020; Wuni & Shen, 2020). According to (García de Soto et al., 2018), the fragmented research on robotics applications in the construction industry hinders the business from making the necessary shift to complete automation (Willmann et al., 2016). Furthermore, the wide range of construction materials and the unique characteristics of construction projects make technology adoption in construction procedures more complex (Bock, 2007). The use of advanced sensing systems can mitigate the challenges facing the collaboration between human workers and robotics in IC by enhancing the safety of human-robot collaboration (L. Wang et al., 2020).
2.2 Existing Studies in Sensing Technologies

There have been studies on estimating the worker's body posture and angle in the field of industrialized construction. (Ray & Teizer, 2012) established a set of guidelines to distinguish between ergonomic and nonergonomic body posture information obtained by a Microsoft Kinect RGB-D camera. Video cameras were also set up to record and study human motions. The work of (Han & Lee, 2013) provides a good illustration of worker body posture estimation by extracting visual elements from two camera perspectives and estimating the correlation between those features. A single-video-camera, joint-level, vision-based ergonomic evaluation tool for construction workers was recently proposed by (Yu et al., 2019). Their approach only validated the computation of four joint angles and made an assumption about the length of the worker's body segments (elbows and knees). In addition to this, Khoury used a vision-based tracking system in conjunction with a foot-mounted inertial measurement unit (IMU) to monitor construction workers (Khoury et al., 2015). Kim built a real-time location tracking system for a tunnel construction site employing a wireless communication approach based on Wi-Fi for the safety monitoring of employees using a single tag (Kim et al., 2011). These studies discuss the vision-based human motion capturing that can be found in industrialized construction tasks. However, human characteristics like height, waist, and shoulder size, and their influence on the occlusion of collaborating workers, were not considered.

In addition to human characteristics and postures, the color and texture of materials can affect vision-based sensing systems (Gurau et al., 2013). In past research, color has been recognized as an efficient feature for distinguishing a material of interest from the background. (Zhu & Brilakis, 2010) applied artificial neural networks to classify regions of concrete in images acquired on a construction site with the help of color and texture features. (Tou et al., 2007) developed a computer-vision-based wood detection system for wood species based on texture features. Using combined color and texture information, (Akhloufi et al., 2007) developed a novel framework for industrial product inspection (wood, roofing tiles).

On the other hand, the autonomous machinery used in the construction industry consists of robots that can only perform a single task, like those that lay bricks, remotely controlled or autonomous construction vehicles, and robots that use computer vision technology for surveillance, surveying, and inspection (Qi et al., 2021d). Industrial robots and CNC machines are given the most consideration in prefabricated and modular building, with uses ranging from specialized construction tasks, such as spraying (Pastor et al., 2001), to robotic fabrication and tiling (King et al., 2014), and robotic fabrication (Kontovourkis & Tryfonos, 2020). In addition to these, several initiatives dealing with the creation of a new generation of autonomous road pavers and asphalt compactors have been created in the field of road construction over the last few years. For autonomous navigation, these mobile devices (robots) made heavy use of GPS-based tracking technologies. Furthermore, (Z. Wang et al., 2019) proposed a unique vision-based strategy for assisting construction waste recycling robots in finding nails and screws by incorporating Faster R-CNN. In the case of mobile robots used in manufacturing, (Heikkilä et al., 2010) proposed the use of CCD cameras, 2D laser profilers, or 3D cameras for the recognition and localization of workpieces. These studies discuss robotics-based automation in industrialized construction without
considering the unique requirements for sensing robots in labor-intensive manufacturing facilities suitable for industrialized construction.

As illustrated in the above-mentioned studies, existing research exploring sensing systems in manufacturing facilities and onsite construction tasks has yet to address the special needs of dynamic, close collaboration between industrial robots and human workers. Additionally, despite the significance of applying cutting-edge technology to industrialized construction projects, there is a dearth of thorough research that offers a critical analysis and roadmap for human-robot collaboration in the manufacturing of industrialized construction. Such research can assist in the integration of sensing systems in robotics-based manufacturing of industrialized construction. As such, there is a pressing need to develop frameworks and guidelines for the integration of sensing systems into robotics-based manufacturing of labor-intensive industries, such as IC. The proposed research objectives, as described in the Research Objective Section, will address these shortcomings in the integration of sensing systems by (1) assessing and selecting sensing systems feasible for integration with robotics-based manufacturing of IC, (2) identifying the factors influencing the detection and location of humans, materials, and robots, and (3) developing guidelines for the integration of advanced sensing systems into the framing tasks.
CHAPTER 3
METHODOLOGY

On a global basis, several projects are now being proposed for the future of the industrial sector. There is strong consensus that people should not be replaced by automation and computer systems, notwithstanding the various terminologies and focuses. Instead, the new technology will operate in tandem with people to improve workplace efficiency, safety, and ambiance. One of the newest industrial technologies, human-robot collaboration, offers the ability to precisely and carefully integrate humans and robots (L. Wang et al., 2019; X. V. Wang et al., 2020). On the other hand, homebuyers are placing demands on the industrialized housing market for homes that represent their distinct personal style and are specially designed to meet their needs. As a result, customers are demanding more variety (Hofman et al., 2006). However, builders do not want to jeopardize production efficiency by departing from their regular models (NAHB, 2004). As such, this research seeks to assist in the selection of an effective sensing system for human-robot collaboration that ensures a safe working environment. The proposed system will contribute to ensuring the future of the human workforce in meeting the increasing demand for highly customized buildings.

To accomplish the identified objectives, the research presented in this thesis follows a three-step process: (1) review of sensing technologies, (2) identification of characteristic and task-based factors, and (3) development of a framework and guidelines to minimize or eliminate occlusion. First, an overview of sensing technologies that can be integrated into robotics-based manufacturing will be performed. Seven major sensing technologies are discussed. After the review, based on the accuracy, precision, advantages, and disadvantages offered by these technologies, the best-suited sensing systems for the robotics-based manufacturing of industrialized construction will be selected. The next stage is to discuss the factors that influence human-robot collaboration. Different wood properties such as color, texture, and grain are explored; human characteristics such as height, waist size, shoulder breadth, and posture are examined; and robot attributes such as color, range, and axis speed are studied. In addition to these characteristic factors, task-driven factors such as the postures and angles of workers are identified, and a pair-wise analysis of tasks and postures is performed. Finally, scenarios that would influence the proposed sensing system for the wall framing activity in IC are identified. Scenarios are first identified for humans, materials, and robots individually; combining all these components then yields scenarios for manufacturing. In addition, a framework for a decision matrix for the selection of the number of cameras or LiDARs is prepared, which helps determine the number of sensors required to detect and localize people according to the size of a manufacturing facility, the number of people working, and the tasks performed in the workstation. The proposed decision matrix and developed guidelines for the RGB-D camera and LiDAR will assist construction professionals in making educated judgments on incorporating sensing systems into their real-world manufacturing lines, thereby enhancing tracking and localization, minimizing occlusion, and improving the safety of human-robot collaboration. Figure 3-1 summarizes the research methodology and visualizes the proposed structure and flow.
Figure 3-1. Framework of the methodology

3.1 Overview of Sensing Technologies

There are several sensing technologies that can be integrated in robotics-based manufacturing of industrialized construction. The identification of effective sensing systems is a key prerequisite for establishing safe human-robot collaboration. As such, this section investigates existing sensing technologies in order to identify the sensing systems best suited for the dynamic nature of industrialized construction tasks.

3.1.1 Global Positioning System (GPS)

The Global Positioning System, a satellite-based radionavigation system, is owned and operated by the US government (Maschinen et al., n.d.), formerly known as Navstar GPS (O et al., 2019). It is one of the global navigation satellite systems (GNSS) that gives a GPS receiver access to geolocation and time data from four or more GPS satellites from anywhere on or near the Earth. Although these technologies can make GPS positioning data more useful, they do not require the user to transmit any data and do not require Internet or telephone reception to function (Global Positioning System | Uses, Advantages & Disadvantages of Global Positioning System, n.d.). It offers vital locating capabilities to users in the military, civic, and commercial sectors worldwide. Position precision is one of the most critical GPS metrics, and error has a significant impact on it. Some of the factors that affect GPS accuracy are satellite position, radio signal noise, multipath error, atmospheric conditions that block or weaken satellite signals, inaccurate time measurements, changes in the satellite almanac that describes satellite orbital patterns, and natural barriers to the signal (GPS.Gov: GPS Overview, n.d.; H. Zhang et al., 2013). Numerous industries, including geodesy, photogrammetry, maritime surveying, and mapping, have made substantial use of GPS. It can swiftly, precisely, and effectively give 3D coordinates, including points, lines, and planes, in any weather (M. Zhang et al., 2017a). GPS is generally suitable for use in outdoor environments. When combining GPS with Dead Reckoning (DR) and Bluetooth beacons, the average error in tracking a concrete truck was less than 10 m (Lu et al., 2007). While locating equipment with GPS in an open area, (Pradhananga & Teizer, 2013) reported an error of 1.1 m, but it increased up to 4.16 m in the presence of an obstacle.

3.1.2 Radio Frequency Identification (RFID)

RFID stands for radio frequency identification. This technique uses electromagnetic, radio, or magnetic transmission to enable the wireless identification of RFID tags (Dubendorf, 2003; Kubitz et al., 1997). An interrogator/reader and several tags are connected wirelessly in a typical system. Different techniques enable the
detection of tags, their precise identification, or even read/write access to the internal memory of the tag. Some tags draw their operating energy from the electromagnetic transmission itself. Tags, readers, and antennae make up an RFID system. It is frequently employed in construction safety management because it can precisely find one or more targets in a static or dynamic indoor environment (M. Zhang et al., 2017b). For the 2D positioning of an object, an average error of 3.7 m was found (Gu et al., 2009a; Haas, 2006). In an experiment carried out by Razavi and Moselhi in an open environment, the positioning error was around 1.3 m (Magar, 2021). The accuracy of RFID can be further improved by combining it with other sensing technologies and applying algorithms to it (Landaluce et al., 2020).

3.1.3 Wireless Local Area Network (WLAN)

WLAN is a system for data transmission that makes use of RF technology. Within the range of wireless signals, WLANs can connect to the network from any point and determine the target's location based on the strength of the detected signal. The targets must be inside the signal coverage region in order to use the WLAN system for positioning, and wireless signal transmitters must be deployed. Because of this, WLAN use is restricted in complex and dynamic construction sites (M. Zhang et al., 2017c). In actuality, impediments may block or even reflect electromagnetic signals, which would ultimately reduce WLAN location accuracy and impede WLAN development on building sites. A lab test demonstrated positioning using WLAN with an average inaccuracy of 2 m (Khoury & Kamat, 2009). In addition, an experiment found that the positioning error for static targets varied from 1.5 m to 4.57 m with a credibility level of 95% and was roughly 7.62 m for dynamic targets (Taneja et al., 2011).
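To make the signal-strength principle described above concrete, the following is a minimal sketch of how a received signal strength (RSSI) reading can be converted into an approximate distance using the standard log-distance path-loss model. The reference power, path-loss exponent, and access-point readings used here are illustrative assumptions, not values taken from this thesis or from any specific WLAN deployment.

```python
import math

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, n=2.5):
    """Estimate distance (m) from an RSSI reading using the
    log-distance path-loss model: RSSI = P0 - 10*n*log10(d).
    p0_dbm (expected RSSI at 1 m) and n (path-loss exponent)
    are assumed, environment-specific constants."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

# Hypothetical readings from three Wi-Fi access points (dBm)
readings = {"AP1": -55.0, "AP2": -63.0, "AP3": -70.0}
for ap, rssi in readings.items():
    print(f"{ap}: ~{rssi_to_distance(rssi):.1f} m from the receiver")
```

Ranges estimated from several access points can then be combined into a 2D position (for example, by the least-squares trilateration sketched later for UWB). The 1.5 m to 4.57 m errors reported above reflect how sensitive this simple model is to obstructions, reflections, and multipath in a construction environment.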
3.1.4 Ultrasonic

An ultrasonic sensor is a technology that uses ultrasonic sound waves to detect the distance to a target object and then transforms the reflected sound into an electrical signal (Apogeeweb, 2021). To calculate the distance between an object and the sensor, the sensor measures the amount of time between the transmitter's sound emission and its contact with the receiver. Most often, proximity sensors are combined with ultrasonic sensors. Ultrasonic sensors are used in both manufacturing technologies and robot obstacle detection systems. Robots use ultrasonic sensors to move away from obstacles in their path and toward the target region. Additionally, ultrasonic sensors are used to locate substantial obstructions and gather data about the distance between the robot and the obstruction. They aid robots with object detection, tracking, and position detection to prevent collisions and guarantee that tasks are completed without difficulty (Ultrasonic Sensors in Robotics | Into Robotics, 2013). The target is located using the triangle location method, and the distance between the measured point and a fixed point is determined using the sound speed and transfer time (M. Zhang et al., 2017d). The accuracy can usually be measured in centimeters, and the technology is well established and inexpensive (M. Zhang et al., 2017e). However, because of its rapid attenuation in air, ultrasound cannot pass through walls and is frequently affected by reflected signals from metal objects. Its transmission distance is also limited. As a backup technique, RF and ultrasound can be combined to provide positioning information. The positioning error was 10 cm, and the orientation accuracy was 3 degrees, according to tests (Priyantha, 2001). One experiment involved positioning a static object indoors, and an average positioning inaccuracy of 3 cm was noted (Maalek & Sadeghpour, 2013).

3.1.5 Vision-Based Sensing

Imaging sensors are used in vision-based sensing to acquire photos or videos. The data is then evaluated using algorithms in order to observe and comprehend the surrounding space (M. Zhang et al., 2017f). The target does not need to carry any devices while using vision-based sensing. The technology itself is capable of meeting positioning requirements across a vast area (Fukuda et al., 2010). Vision-based sensing, on the other hand, is susceptible to the effects of the surrounding environment, such as lighting and background color (Gu et al., 2009a). (Park et al., 2011) carried out an experiment for tracking construction resources with an on-site camera system and performed comparative studies of vision tracking methods. Technology advancements and the creation of new algorithms enhanced object identification with a 0.67 s time lag and 99% accuracy for detecting workers wearing safety vests (Heng et al., 2016). In an experiment using vision-based positioning to capture unsafe work behavior conducted by (T. H. Lee & Han, 2013a), it detected 88% of all the identified unsafe behaviors. In addition to monitoring the location of resources on a wide scale, vision tracking can perform more efficiently at congested, outdoor sites because it does not require any pre-tagging of resources. Multiple entities can be tracked if they are visible in camera views. When the information in the video data is insufficient, it fails to track entities. For example, it does not provide the location of obscured entities (no information) and thus is unsuitable for nighttime monitoring (limited information). It does,
however, offer the advantage of being able to track a huge number of things without the use of tags, which is useful for tracking in large-scale, congested sites (Brilakis, 2012).

3.1.6 Light Detection and Ranging (LiDAR)

A common technique for determining an object's precise distance on the surface of the earth is LiDAR, or light detection and ranging. Even though it was originally utilized in the 1960s, when laser scanners were affixed to airplanes, it took 20 years for LiDAR to achieve the kind of popularity it deserved. It was not until the advent of GPS in the 1980s that it gained popularity as a tool for producing precise geographical measurements. We ought to be better acquainted with LiDAR mapping technology and how it functions now that its application has spread to a variety of fields (B. Sharma, 2021). LiDAR technology helps robots navigate their surroundings through object perception, identification, and collision avoidance. Real-time information on the robot's surroundings, including walls, doors, people, and other objects, is provided via LiDAR sensors. LiDAR can help robots perform a variety of jobs and run autonomously. An operator can work adjacent to a robotic arm by adding LiDAR, which will establish a safe space surrounding the robot; pick-and-place tasks benefit from this usage of robotic arms (LiDAR Sensors for Robotic Systems | Mapix Technologies, n.d.). LiDAR can measure an obstacle's distance and assist in creating a grid map that shows the structure and obstructions on the robot's running plane (G. Jiang et al., 2019). Combining vision sensors with LiDAR will also help build the environment map, which can become another hotspot for robot navigation. In its field of vision, an RGB-D camera can provide both color and depth information. It is the most effective sensor for
creating an entire 3D scene map (Yin et al., 2019). In an experiment on the relocalization of a consumer robot, (Jin et al., 2019) combined a low-cost LiDAR and a camera. This system ultimately creates a 2.5D map that helps the robot re-localize quickly. The proposed method had a 95% success rate (with the initial pose given to the robot) and a 92% success rate (without giving the initial pose).

3.1.7 Ultra-Wideband (UWB)

A real-time location system (RTLS) must meet strict quality requirements and offer a feature set tailored to the industrial environment's high reliability and performance requirements for precise indoor tracking (Sewio, n.d.). The accuracy, scalability, and reliability required in process-planned production, where the cost of every minute matters, can be achieved with ultra-wideband (UWB). Even in hostile metallic surroundings, UWB reliably offers 30 cm accuracy (Ridolfi et al., 2018). The use of UWB technology on building sites has practical as well as technological ramifications. The key concern from a practical standpoint is the operational influence on site layout. To transfer data and keep receivers and central units in sync, UWB requires CAT5 cables. Laying CAT5 cables on a building site is typically a difficult task because the harsh and constantly evolving environment requires receivers to be placed along boundaries, minimizing crossing paths as much as possible (Giretti et al., 2012). The operating range, accuracy, and reliability of position monitoring of materials/workers moving at normal transport/walking speed through potentially obstructed non-line-of-sight situations are the primary concerns regarding the use of UWB position tracking on construction sites (Giretti et al., 2012).
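Because UWB (like the ultrasonic and WLAN approaches above) delivers ranges to fixed anchors rather than positions directly, the following is a minimal, hedged sketch of how such ranges could be turned into a 2D position estimate by least-squares trilateration. The anchor coordinates and measured ranges are purely illustrative assumptions; a production RTLS would use the vendor's own solver and calibration.

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Estimate a 2D position from ranges to >= 3 fixed anchors by
    linearizing the range equations and solving the resulting
    least-squares system (a common textbook formulation, not a
    vendor-specific algorithm)."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    d0 = ranges[0]
    # Subtract the first range equation from the others to remove quadratic terms.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical UWB anchors at the corners of a 10 m x 8 m manufacturing bay
anchors = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
ranges = [5.1, 6.7, 7.7, 6.4]  # illustrative tag-to-anchor ranges (m)
print("Estimated tag position (m):", trilaterate_2d(anchors, ranges))
```

With four anchors the system is overdetermined, so small ranging errors on the order of the roughly 30 cm accuracy reported above are averaged out by the least-squares fit rather than propagated directly into the position estimate.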
Table 3-1 shows a summary of the different sensing systems and their accuracy according to previous research.

Table 3-1. Accuracy of sensors

Real-time locating system | Accuracy from publications (best results) | Source
Global Positioning System (GPS) | 1.1-1.16 m | (Pradhananga & Teizer, 2013)
Radio Frequency Identification (RFID) | 1.3 m | (Razavi & Moselhi, 2012)
Wireless Local Area Network (WLAN) | 1.5-4.57 m | (Khoury & Kamat, n.d.; Taneja et al., 2011)
Ultrasonic | 3-5 cm | (Maalek & Sadeghpour, 2013)
Vision-based sensing | 0.65 m | (Gu et al., 2009a)
Light Detection and Ranging (LiDAR) | 3 cm | (Zhang et al.; Wang et al.; Fan et al., n.d.)
Ultra-wideband (UWB) | 0.15-0.3 m | (Cheng et al., 2013; Gu et al., 2009b)
Table 3-2 summarizes the sensing systems discussed above. This summary helps in choosing a sensor, or a fusion/combination of sensors, that can be implemented in robotics-based manufacturing of industrialized construction.

Table 3-2. Pros and cons of sensing technologies

Global Positioning System (GPS)
Pros: Provides 3D coordinates accurately in all weather; freely accessible to anyone with a GPS receiver; can be combined with GSM for object tracking.
Cons: Does not perform well in indoor spaces with obstacles; error ranges from 1.1 m to 4.16 m in the presence of obstacles; not capable of object detection.
Sources: (Damani et al., 2015; H. M. Khoury & Kamat, n.d.; Pradhananga & Teizer, 2013)

Radio Frequency Identification (RFID)
Pros: Tags are less sensitive to adverse conditions; many tags can be read simultaneously; can be combined with other sensors.
Cons: RFID readings gathered from the real world are noisy; detection failure of tags is common; deploying RFID on a large scale is challenging.
Sources: (Kaur et al., 2011; Seol et al., 2017; L. Yang et al., 2015)

Wireless Local Area Network (WLAN)
Pros: The positioning system can be implemented simply in software; reported accuracy is 1-3 m.
Cons: Only provides 2D location information.
Sources: (Batistic & Tomic, 2018; Khoury & Kamat, n.d.; D. Zhang et al., 2010)

Ultrasonic
Pros: Works well in indoor spaces; average accuracy is 3-5 cm.
Cons: New infrastructure, in the form of sensors and transmitters, is needed in every room where the system is used; a large number of receivers must be deployed.
Sources: (Koyuncu et al., 2010; Maalek & Sadeghpour, 2013; M. Zhang et al., 2017g)

Vision-based sensing
Pros: Convenient for acquiring color and depth information; textures and contours of objects and features can be extracted by 2D object detection techniques; can be combined with indoor depth sensors like LiDAR; high-resolution images can help distinguish between categories that are very similar in geometry.
Cons: Background noise can create real-time computational complexity; occlusion in the image can lead to inaccurate object detection; susceptible to the effects of the surrounding environment, such as lighting and background color.
Sources: (Kim et al., 2018; Wang et al., 2021; Gu et al., 2009)

Light Detection and Ranging (LiDAR)
Pros: LiDAR point clouds provide absolute depth and scale; can accurately acquire the shape and posture of the detected object.
Cons: Lacks texture and dense information; point clouds become increasingly sparse with increasing distance from the scanning center.
Sources: (Z. Zhang et al., 2022; Zhao et al., 2020)

Ultra-wideband (UWB)
Pros: Low-cost and low-power IC processes; extremely fine time and range resolution, even through opaque media.
Cons: Can interfere with nearby systems operating in the UWB spectrum due to misconfiguration.
Sources: (Alarifi et al., 2016)
36 3.2 Selection of Sensing System The previous section discusses in depth the uses, accuracy, advantages, and downsides of sensing systems that can be integrated in robotic bases manufacturing of industrialized construction for the aim of safely collaborating humans and robots. As a result, a suitable sensing system for detecting and identifying workers, materials, and equipment amid the complex nature of industrialized building activities must be chosen. GPS is generally suitable for outdoor environment and its accuracy depend on several fact ors. However, when it comes to using it in indoor spaces with some or multiple targets in static or dynamic indoor environment. On the downside implementation of RFI D would become a tedious task while working with several workers as RFID consist of tags which needs to be provided to each worker, readers which needs to be attached in several places inside the manufacturing plant and antennas needs to be placed. In addi tion to this, the positioning accuracy of RFID ranges from 1.3m 3.7m (M. Zhang et al., 2017b, Song et al,2006; Gu et al,2009, Razvi et al; Moslhi et al, 2012) . Since it provides precise 3D position values in real time, UWB sensing has an advantage over R FID sensing. The primary barrier to UWB adoption at this time is a required measurement infrastructure, but once established, can last for at least the project's duration. For a safe human robot collaboration more accurate sensing system needs to be implem ented. Similarly, the application of WLAN in a dynamic and intricate building site is limited, but it can be utilized for locating in a laboratory experiment. However, WLAN accuracy ranges from 1.5m to 7.62m (Khoury & Kamat, n.d.; Taneja et al., 2011) , which
PAGE 37
37 is insufficient for HRC. Furthermore, sensing systems such as GPS, RFID, and WLAN lack the ability to recognize objects, which is critical for human robot coexistence. It is convenient to acquire color, texture, and contour information of an item whe n using vision based sensing. A high resolution vision based sensor would also be capable of reliably distinguishing between geometrically related categories. LiDAR, on the other hand, can precisely determine the detected object's depth, scale, shape, and posture. LiDAR can effectively be included into a robot to explore its surroundings and measure the distance between obstacles. Considering the above stated analyses of sensing technolgoies, the use of vision based sensors or LiDAR can be easily implemente d in a robotics based manufacturing environment and can precisely recognize and localize different components such as humans, material, and robot in industrialized construction Following subsection discusses in detail about different consumer Cameras and LiDAR which can be implemented in industrialized construction. 3.2.1 Camera Based Technique Visual object category and instance detection have advanced quickly over the past decade due to the availability of public picture repositories and recognition ben chmarks. The RGB D camera, a new generation of sensor technologies that can produce high quality synchronized videos in both color and depth, is emerging right now (Silberman et al., 2012) . A number of downstream real world applications rely on 3D object detection, which is a crucial component of environmental perception systems and one of the most fundamental jobs in comprehending the 3D visual world. RGB D photographs comprise depth information representing the geometry of space, object texture, and sema ntic information. (Y. Wang et al., 2021b) .
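As an illustration of how such synchronized color and depth data can be accessed in practice, the minimal Python sketch below reads one aligned color/depth frame pair through the librealsense wrapper (pyrealsense2). The chosen stream resolutions, frame rate, and the assumption that a RealSense device is attached are illustrative only and should be adjusted to the specific camera.

    import numpy as np
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # Stream settings are assumptions; supported modes should be queried per device.
    config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)

    pipeline.start(config)
    align = rs.align(rs.stream.color)      # map depth pixels onto the color image

    try:
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()

        depth = np.asanyarray(depth_frame.get_data())   # uint16 depth image
        color = np.asanyarray(color_frame.get_data())   # 8-bit BGR image

        # Distance (in meters) at the image center, as a quick sanity check.
        h, w = depth.shape
        print("Center distance:", depth_frame.get_distance(w // 2, h // 2), "m")
    finally:
        pipeline.stop()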
Since inexpensive depth cameras like the Kinect, Xtion, and ToF cameras have become more popular, the quality of depth maps has significantly improved compared to stereo rigs. Studies using these sensors have shown how useful depth cameras are for dealing with severe human occlusions and complicated backgrounds, both in terms of accuracy and efficiency. In existing works, depth cameras are frequently positioned either horizontally at human eye level or vertically overhead. Along with the principles of depth measuring, Intel has unveiled a solid-state LiDAR depth technology intended for indoor applications. Applications that need high-resolution (Robu.in, n.d.), high-accuracy depth data can use the Intel RealSense LiDAR Camera L515. The L515's optimal range is 0.25 to 9 meters, and its depth accuracy is roughly 5 to 14 millimeters (Intel, n.d.).

Understanding the accuracy and precision of RGB-D cameras is crucial if they are to be used for measurement tasks, particularly for mapping applications. The effectiveness of RGB-D sensors in interior settings has been examined in a number of studies (Halmetschlager-Funek et al., 2019; Lourenco & Araujo, 2021a; Ulrich et al., 2020). In an indoor setting with ranges up to 2 m, Halmetschlager-Funek et al. (2019) tested ten depth cameras for bias, precision, lateral noise, various lighting conditions and materials, and varied sensor setups. In research on face analysis, Ulrich et al. (2020) investigated various 3D camera systems and graded them according to how well they applied to recognition, identification, and other use cases. Table 3-3 compares nine distinct types of RGB-D cameras that have been utilized in the past for object recognition in the robotics, agricultural, and medical fields.
Table 3-3. RGB-D cameras

Xtion PRO Live: structured light (SL); range 0.8-3.5 m; resolution 1280 × 1024. (Kundu et al., 2016; Migniot & Ababsa, n.d.)
RealSense F200: SL; range 0.2-1.2 m; error 1%; resolution 1920 × 1080. (Qian et al., 2018; Sharma & Valles, 2020; ten Harkel et al., 2017)
RealSense SR300: SL; range 0.3-2.0 m; error 1%; resolution 1920 × 1080. (Liao et al., 2018; C. M. Lin et al., 2018; Schwarz et al., 2017; Song et al., 2019)
CamBoard Pico Monstar: ToF; range 0.5-6.0 m; error <1%; resolution 352 × 287. (Fu et al., 2020a; Zhou et al., 2021; Zoghlami et al., 2021)
Kinect v2: ToF; range 0.4-4.5 m; error <1%; resolution 1920 × 1080. (Bhateja et al., 2021; Kulkarni et al., 2021; Mehdi et al., 2021; Pham et al., 2017; Tölgyessy et al., 2021; Xu et al., 2019; Y. Yang et al., 2015)
CamBoard Pico Flex: ToF; range 0.1-4.0 m; error <6 mm. (Condotta et al., 2020; Elena Maria BARALIS Ing Andrea, n.d.; Grenzdorffer et al., 2020; Mutiara Sari et al., 2022; Novkovic et al., 2020)
RealSense D415: active IR stereo (AIRS); range 0.16-10 m; error <2%; resolution 1920 × 1080. (Andriyanov, 2022; Andriyanov et al., 2022; Servi et al., 2021a; Tadic et al., 2019)
RealSense D435: AIRS; range 0.2-10 m; error <2%; resolution 1920 × 1080. (Lecrosnier et al., 2020; Nebiker et al., 2021; Rakhimkul et al., 2019; Shin et al., 2019)
RealSense L515: LiDAR; range 0.25-9.0 m; error 5-14 mm; resolution 1920 × 1080. (Breitbarth et al., 2021a; Horcajo De La Cruz, 2021; Joo et al., 2021; Mazhar et al., 2021; Object Classification, Detection and State Estimation Using YOLO v3 Deep Neural Network and Sensor Fusion of Stereo Camera and LiDAR, ProQuest, n.d.)
In order to verify the performance claims made by the manufacturer of an optical coordinate measuring system (CMS), the ISO 10360-13 standard, published in 2021, provides acceptance and reverification testing. This standard is a major development in the metrological characterization of 3D optical systems, an area still largely unrecognized by international standards. The ISO 10360-13:2021 standard defines the execution of four separate tests: the volumetric length measurement error in concatenated measuring volume, distortion error, flat form distortion, and probing error (split into probing size and probing form dispersion). The RealSense D415, D455, and L515 were put to the test by Servi et al. (2021b) in compliance with the ISO 10360-13 standard. Overall, the testing revealed the advantages and disadvantages of each device. The D415 demonstrated superior reconstruction quality in tests focused solely on the short range (calibrated sphere and 3D reconstruction). The L515 showed a greater capacity to depict planar surfaces when tested for systematic depth errors, whereas the D455 showed a better ability when tested for standard deviation.

From the study above and Table 3-3, the RealSense L515 shows good performance in an indoor environment in terms of depth error, range, precision, and resolution. In addition, the L515 has been specifically designed for indoor applications, which helps it handle occlusion and identify complex objects. Lourenco and Araujo (2021b) conducted an experimental analysis and comparison of the depth estimation performed by the RGB-D cameras SR305, D415, and L515 from the Intel RealSense product family. Structured light projection, active stereoscopy, and ToF are the three different depth sensing techniques used by these three cameras.
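For context, active stereoscopy (as used by the D415) recovers depth from the disparity between two imagers. The short sketch below shows the underlying relation Z = f · B / d with purely illustrative values for focal length, baseline, and disparity.

    # Depth from stereo disparity; all numeric values are assumptions for illustration.
    focal_px = 640.0      # focal length expressed in pixels (assumed)
    baseline_m = 0.055    # distance between the two imagers, ~55 mm (assumed)
    disparity_px = 18.0   # pixel offset of the same point between the two views

    depth_m = focal_px * baseline_m / disparity_px
    print(f"Estimated depth: {depth_m:.2f} m")   # ~1.96 m

The same relation also explains the limitation noted below: for a small baseline, the disparity of distant objects approaches zero, so small pixel errors translate into large depth errors.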
The authors used a regulated and stable indoor lighting environment to assess the cameras' functionality, accuracy, and precision. In their trial setting, the L515's use of solid-state LiDAR ToF technology gave more exact and accurate results than the other two cameras (Lehtola et al., 2022).

The RGB-D camera's ability to simultaneously capture RGB and depth data has lately gained significant research interest for rigid robot position estimation. Although a lot of work has been done, some problems, such as position estimation in scenes without textures or structures, remain unresolved (Yu et al., 2019a). There are numerous issues with vision-based approaches that have not yet been resolved. The first is the estimation result's lack of scale. In general, unless an additional sensor, such as an Inertial Measurement Unit (IMU) (Huang, 2019), is included, single-camera posture estimation algorithms cannot create a transformation matrix with an absolute scale. Since the stereo posture estimation approach requires at least two cameras and uses stereo parallax relations to calculate depth, the system cost is increased (X. Lin et al., 2021; Mouats et al., 2015). The separation of the two cameras is another restriction of this technique: if this distance is small, the estimation accuracy for distant objects is low. In recent years, RGB-D cameras (like the Microsoft Kinect and Intel RealSense) have appeared; these cameras simultaneously record RGB images and depth data. Consequently, they offer a potent remedy for the scale issue (Yu et al., 2019b).

Table 3-4 below discusses some of the advantages and disadvantages of the RGB-D cameras listed in Table 3-3. This summary table and the previous literature will help identify an efficient RGB-D camera for robotics-based manufacturing of industrialized construction.
Table 3-4. Summary for cameras

Xtion PRO Live
  Advantages: Compatible with different operating systems and ROS; physically smaller; a good fit for acquiring distance information.
  Disadvantages: The skeleton tracking SDK is being discontinued.
  (Chong et al., 2015; Pavón-Pulido et al., 2020)

RealSense F200
  Advantages: Lower price; high frame rate of 60 fps; the SDK has a built-in model of the human hand.
  Disadvantages: Short range of 1.2 m; heavy occlusion reduces the recognition success rate.
  (Fu et al., 2020b; Vilaça et al., 2017)

RealSense SR300
  Advantages: Low-cost sensor; the sensor is handheld.
  Disadvantages: Short range of 2.0 m; does not support large scanning volumes.
  (Fu et al., 2020b; Milan et al., n.d.)

CamBoard Pico Monstar
  Advantages: Can be powered entirely from USB 3.0; the Royale Software Suite SDK is included with this camera; supports interaction through C++, Python, OpenCV, OpenNI2, MATLAB, ROS, and DotNet.
  Disadvantages: Reduced performance in the presence of sunlight; not kept up to date with Ubuntu or ROS.
  (Payne, 2020; Steinbaeck et al., 2018)

Kinect v2
  Advantages: Can sense depth, capture color images, emit infrared rays, and input audio; can achieve accurate depth information.
  Disadvantages: Uses ToF for depth sensing and is only compatible with Windows; requires a separate AC/DC power supply.
  (Chong et al., 2015; Eric & Jang, 2017; Pavón-Pulido et al., 2020)

CamBoard Pico Flex
  Advantages: Produces results with very little noise outdoors; not affected by background light, so the 2D image is robust in every lighting condition.
  Disadvantages: Short range of 4 m; low sensor resolution and low reflectance of black checkerboard patches.
  (Fuersattel et al., 2017; Raju & Sazonov, 2022; Steich et al., 2016)

RealSense D415
  Advantages: High contrast, so projected spots can be observed even in a bright room; programmable dot density.
  Disadvantages: Susceptible to laser speckle, which can deteriorate the depth performance by more than 30%.
  (Grunnet-Jepsen et al., n.d.)

RealSense D435
  Advantages: Ability to redistribute power; the projected texture is scale invariant, meaning it shows structure at many different ranges.
  Disadvantages: Fails in sunlight; image distortion and pixel loss for distant objects.
  (Grunnet-Jepsen et al., n.d.; Neupane et al., 2021)

RealSense L515
  Advantages: Several successful tests have been conducted with glossy or partially transparent surfaces as well as with human skin, showing applicability for human-machine interaction; the depth sensor allows working with two different resolutions, which also enables different depth measurement ranges; the Intel RealSense Viewer SDK is open source and platform independent; software interfaces are available for Python, MATLAB, node.js, LabVIEW, OpenCV, and PCL; high color image resolution gives higher accuracy.
  Disadvantages: Because of the high resolution, the frame rate is slow; warp and cluster failures can occur during marker recognition.
  (Breitbarth et al., 2021b; Sarmadi et al., 2021; Xie et al., 2022)
From the tests on RGB-D cameras discussed by previous researchers, and the pros and cons summarized in Table 3-4, the RealSense L515 has an added advantage due to its accuracy, compatibility with different programming languages, low cost, and applicability to human-robot interaction. Its depth sensor, open-source SDK, high resolution, and other specifications make it a cost-efficient and reliable sensing device.

3.2.2 Laser-Based Technique

As discussed above, LiDAR is an acronym for Light Detection and Ranging. It is a remote sensing tool originally used to examine the surface of the earth and falls under the Time-of-Flight (ToF) sensor category (Kolakowski et al., 2022). By directing a laser at an item and recording the travel time, LiDAR calculates the object's distance (H. Wang et al., 2016). The equation for calculating the distance from the round-trip travel time of a returning light pulse is (Deans & Hebert, 2001)

    D = (SL × FT) / 2                                    (3-1)

where FT is the flight time, SL is the speed of light, and D is the distance (Khan et al., 2021). This makes it easier to determine exact distances to land points, heights, and the locations of objects on the ground (Dissanayake et al., 2011). As well as evaluating the shape and effectiveness of structures, the LiDAR equation (3-2) is utilized to quantify individual estimates of the atmosphere:

    PR = SpCf × GR × PT² × TB                            (3-2)

2D LiDAR sensors record X and Y parameters using a single axis of beams (Dissanayake et al., 2011). 3D LiDAR sensors operate similarly to their 2D counterparts, but additional measurements along the Z axis are required to provide genuine 3D data. Usually, many lasers are projected at different angles, or longitudinally, to capture data from the third axis. Although 3D LiDAR sensors are substantially more expensive than 2D ones, they have superior precision and resolution. The visualization and in-depth examination of technological structures, such as bend radius, are ideal applications for 3D LiDAR.

Wuhan University experimented with a Velodyne 64E laser scanner installed on a self-driving car (Smart V-II). An automobile remained in front of it, creating an obstruction in each frame's point cloud. The system was unable to extract additional cars from the point cloud in these frames because so few laser points reflected from them once they entered the occlusion (L. Zhang et al., 2013). LiDAR can estimate an object's precise distance, but it is challenging to classify the object. Furthermore, it is well recognized that when humans are partially obscured, they are very difficult to detect due to a lack of data. Even in a scene that is entirely static, occlusion and viewpoint changes give the impression of dynamic behavior. Due to this uncertainty, it is challenging to reliably detect truly dynamic objects without raising the risk of false alarms. Research on the challenge of detecting and following several moving targets has been ongoing for decades. Early efforts concentrated on tracking discontinuous point-like targets, but it soon became clear that the difficulty lay in correctly associating noisy measurements with object tracks (D. Z. Wang et al., 2015a).
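A rough sense of how such LiDAR point clouds are turned into candidate objects is given by the hedged sketch below, which uses the Open3D library to downsample a scan, strip the dominant floor plane, and cluster the remaining returns. The file name and all parameter values are assumptions for illustration, not a prescribed pipeline.

    # Minimal sketch: clustering a LiDAR scan into candidate objects with Open3D.
    # "scan.pcd" is a hypothetical point-cloud file; parameters are illustrative only.
    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.pcd")
    pcd = pcd.voxel_down_sample(voxel_size=0.05)       # thin out dense near-field points

    # Remove the dominant ground/floor plane so workers, robots, and material stand out.
    plane_model, inliers = pcd.segment_plane(distance_threshold=0.03,
                                             ransac_n=3, num_iterations=500)
    objects = pcd.select_by_index(inliers, invert=True)

    # Euclidean clustering; sparse, distant returns may fail to form clusters,
    # which is one way occlusion and range-dependent sparsity show up in practice.
    labels = np.array(objects.cluster_dbscan(eps=0.3, min_points=20))
    print(f"{labels.max() + 1} candidate objects found")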
Table 3-5 shows different LiDAR models from Velodyne, Sick, and Intel that are being used for detecting, classifying, and tracking humans in real time. These LiDAR units are also used in the field of autonomous cars for the detection of pedestrians, signals, and various other objects on the road.
Table 3-5. LiDAR review

Velodyne HDL-64E: FOV 360° × 26.9°; max range 120 m; accuracy 2 cm. (Azim & Aycard, 2012; Bai et al., 2022; Borcs et al., 2013; Halterman & Bruch, 2010; Minemura et al., 2018)
Velodyne HDL-32E: FOV 360° × 41.3°; max range 100 m; accuracy 2 cm. (Beltrán et al., 2018; R. Wang et al., 2021; Ye et al., 2016; W. Zhang et al., 2018)
Velodyne Puck: FOV 360° × 30°; max range 100 m; accuracy 3 cm. (Bu et al., 2020; J. Lin & Zhang, 2020a, 2020b; Yoon et al., 2020)
Sick LD-MRS400001: FOV 110° × 3.2°; max range 150 m; accuracy 10 cm. (Information Fusion (FUSION), 2014 17th International Conference On, n.d.; Simultaneous Tracking and Shape Estimation with Laser Scanners, IEEE Conference Publication, IEEE Xplore, n.d.; C. Jiang et al., 2022; Pereira et al., 2016)
Sick LD-MRS: FOV 110° × 3.2°; max range 150 m; accuracy 10 cm. (Aryal, 2018; Aryal & Baine, 2019; K. Cho et al., 2012)
Intel RealSense L515 (with camera): FOV 70° × 55°; max range 9 m; accuracy ~5-15 mm. (Breitbarth et al., 2021a; Horcajo De La Cruz, 2021; Joo et al., 2021; Mazhar et al., 2021; Object Classification, Detection and State Estimation Using YOLO v3 Deep Neural Network and Sensor Fusion of Stereo Camera and LiDAR, ProQuest, n.d.)
The Intel RealSense L515 is one of the smallest high-resolution systems and has undergone extensive testing in compliance with VDI/VDE Guideline 2634. Furthermore, research was conducted on human skin as well as glossy or semi-transparent surfaces (acrylic glass, carbon fiber, and aluminum), and the applicability of the device to human-machine interaction was demonstrated in the latter study (RealSense, 2020). As part of a project to design an autonomous self-driving automobile, the HDL-64E was selected for Google's fleet of robotic Toyota Priuses (Guizzo, 2011). The Velodyne HDL-32E (Chan et al., 2013) was employed by the University of Michigan to create a dataset across the university's North Campus (Carlevaris-Bianco et al., 2016); for long-term autonomous cars, this dataset provided images, LiDAR, GPS, and INS ground truth. A dataset was also developed by the University of Oxford (Maddern et al., 2016) using a Sick LiDAR; it consisted of around 1.01 × 10⁵ km of trajectories through central Oxford. The Sick LiDAR was able to capture a long-term dataset and successfully provided images, LiDAR point clouds, GPS, and INS ground truth for autonomous vehicles (Bello et al., 2020).

Table 3-6 below discusses some of the advantages and disadvantages of the different LiDAR units listed in Table 3-5. This summary table and the previous literature will help identify an efficient LiDAR for robotics-based manufacturing of industrialized construction.
Table 3-6. Summary for LiDAR

HDL-64E
  Advantages: High accuracy of around 98.91% in capturing the surroundings.
  Disadvantages: Very expensive (~$75,000); can only produce sparser point clouds.
  (Gao et al., 2018; Moosmann & Stiller, 2011; T. E. Wu et al., 2018)

HDL-32E
  Advantages: Enhanced navigation performance during the local path planning phase in terms of a richer 3D surrounding environmental map, higher efficiency, and longer range.
  Disadvantages: Exhibits temporal instability within the first hour of operation; being mono-fiber, it has the disadvantage of hitting objects from a single point of view, so there are many occlusions.
  (Kelly et al., 2022; Roynard et al., 2018)

Puck
  Advantages: A range accuracy of 2 cm can be achieved; has a high scanning range suitable for use with small UAVs for obstacle detection and avoidance.
  Disadvantages: The hardware of the laser scanner alone currently costs over $4,000; low accuracy in estimation.
  (Lou et al., 2018; Moffatt et al., 2020; T. Wu et al., 2020)

LD-MRS400001
  Advantages: Can work in harsh conditions like snow, rain, and dust; the sensor adopts a four-line design that simultaneously emits four laser beams to form four stacked planes, with a scanning interval angle of 0.8°.
  Disadvantages: Expensive sensor (~$30,000); high scanning sensitivity for objects with transparent properties.
  (LD-MRS400001, n.d.; SICK LD-MRS, AutonomouStuff, n.d.; L. Li et al., 2022)

LD-MRS
  Advantages: Provides multi-layer scanning; the native tracking system clusters each incoming scan and keeps track of every cluster.
  Disadvantages: Makes no distinction between static and dynamic objects; low range accuracy.
  (Chachich et al., 2015; D. Z. Wang et al., 2015b)

RealSense L515
  Advantages: The laser meets the standards of laser class 1 according to DIN EN/IEC 60825-1:2014; can be easily mounted on a tripod; good indoor performance in terms of accuracy; as a depth sensor, it works well in separating people in the foreground from the background; can be used for moving scenes without problems.
  Disadvantages: Significant differences in depth can be seen with changes in the type of illumination; the value of the depth unit is not modifiable and is set to 0.00025; error might increase at close range (100-500 mm).
  (Breitbarth et al., 2021c; Servi et al., 2021b)
Considering the previous uses of different LiDAR devices gives a thorough understanding of the advantages and drawbacks of these laser-based sensors. Hence, based on the tests and datasets obtained from LiDAR by previous researchers, and the pros and cons listed in Table 3-6, the RealSense L515 would be a reliable sensor: its integrated depth camera brings an additional level of precision and accuracy over its entire operating range.

3.3 Factors Influencing Occlusions in Human-Robot Collaboration

Important real-world items, such as humans, materials, and robots performing different tasks in robotics-based manufacturing of industrialized construction, need to be accurately detected and localized. However, sensing systems still face several challenges. The sections below discuss the characteristic factors and task-driven factors that influence the sensing technology.

3.3.1 Characteristic Factors Influencing Sensing Technology

Important components of robotics-based manufacturing, such as humans, materials, and robots, have different characteristics like color, texture, shape, and size. The backgrounds against which these things are placed in a construction setup are intricate and dense, and factors like lighting, size, and object count make them more challenging to represent accurately (Papageorgiou & Poggio, 2000). Initial object detection can be facilitated by using surface information, such as color and texture, along with shape information (Gibson, 1979). In fact, according to one study (Davidoff & Ostergaard, 2007), properly colored objects are identified more quickly than monochrome ones.
These varying characteristics necessitate the study of human characteristics such as height, waist size, shoulder width, and skin tone; the wood of various color tones and grain arrangements used in construction; robots of various colors, textures, speeds, and ranges; and the different human postures found in construction. Taking these aspects into consideration, the sensing system would be able to precisely capture all possible shapes, postures, colors, and textures of an item, which can ultimately lead to a robust sensing system. Figure 3-2 summarizes the factors that would influence the sensing system.

Figure 3-2. Factors influencing the sensing system
3.3.1.1 Wood types

Wood is non-uniform and inconsistent by nature, and it contains flaws that affect the structural qualities as well as the appearance of the timber made from the wood material. Much has been written about image segmentation techniques used to detect flaws in wood (Funck et al., 2003; Niskanen et al., n.d.). These studies show that color descriptors have a significant impact on a variety of visual inspection tasks. Because of the irregularly colored surfaces, texture measurements are required for a range of further operations. Color and texture must be integrated to achieve maximum performance in a variety of applications (Mäenpää et al., 2003). Individual pieces of lumber vary greatly in quality and appearance when it comes to knots, grain slopes, shakes, and other natural traits. As a result, they have a diverse set of strengths, uses, and values. Table 3-7 summarizes the wood characteristics and variations that can be seen for hardwood and softwood used in construction. In addition, the Description column discusses the properties of wood (e.g., change in color or texture) that may influence the sensing system.
Table 3-7. Wood characteristics

Ash: grayish brown, brown, or pale yellow; straight grain; medium to coarse texture. Has open pores, which make the surface look rough.
Birch: light red-brown; straight or curved grain; fine and even texture. Can turn slightly yellow when exposed to sun.
Chestnut: whitish to light brown (darker with age); straight to spiral or interlocked grain; coarse and uneven texture. After the moisture in the wood dries, the grain starts splitting on its own.
Maple: off-white cream color, sometimes with a reddish or golden hue; straight but possibly wavy grain; fine and even texture. The wood color fades over time, which changes the color of the wood.
Oak: pinkish to light reddish brown; straight grain; coarse and uneven texture. May turn slightly darker with time.
Black Walnut: rich chocolate or purplish brown; straight but sometimes irregular grain; medium texture with moderate natural luster. With time, the color changes to a honey-like, pale tone.
Sugar Pine: light brown to pale reddish brown; straight grain; even and medium to coarse texture. Staining pine produces uneven and blotchy stains; pre-conditioned pine gives a uniform appearance.
Douglas Fir: sapwood whitish to pale yellowish or reddish; straight or slightly wavy grain; medium to coarse texture. The wood can exhibit wild grain patterns.
Western Red Cedar: reddish or pinkish brown to dull brown; straight grain; coarse texture with moderate natural luster. Often shows random streaks and bands of darker red/brown areas.
Western Hemlock: not distinct, but almost white near the bark; straight grain; coarse and uneven texture. Occasionally contains dark streaks; conspicuous growth rings can exhibit interesting grain patterns on flatsawn surfaces.
A human inspector can readily account for the enormous natural variation in the appearance of sawn wood when deciding the kinds of flaws and the grade of each board. However, these variations are a significant source of complexity for automatic wood inspection systems. They can affect the manufacturing process, the ability to detect the wood, and the ability to detect any abnormality, and this is the reason an overview of different types of wood was performed.

3.3.1.2 Human-related factors

The process of object detection involves spotting objects in still or moving images captured by a camera. Physical characteristics of an object or individual, such as its height, body width, and length, are crucial identifiers (Mogre et al., 2022). In addition, edges, colors, and textures also capture important cues for discriminating humans from the background (Schwartz et al., 2009). Human skin color recognition is crucial in many sensing system applications. People's skin tones differ from one another and from one region of the world to another. The difficulty of creating and implementing real-time sensing systems is increased when skin color resembles non-skin elements such as wooden objects or wall paint. Skin color information, motion signals, texture, and edge features are integrated to track and recognize human skin in applications including Human-Computer Interaction (HCI), face detection, hand gesture recognition, and video surveillance, among others. As mentioned above, some of the important factors for the sensing system are the accurate detection of color, texture, and humans themselves. However, when two workers are working in the same environment, it is possible for one worker to partially cover the other. It therefore becomes important to study different characteristics of humans, like height, waist size, shoulder width, skin color, and human poses in construction, in order to develop an efficient computer vision system.

Height: The mean height for adult men and women aged 20 and over is shown in Tables 3-8 and 3-9 for the years 1999-2000 through 2015-2016. Age-adjusted measurements of the mean height of men grew from 1999-2000 (175.6 cm [69.2 in]) to 2003-2004 (176.4 cm [69.4 in]), and subsequently fell until 2015-2016 (175.4 cm [69.1 in]). Both the increasing and decreasing trends were statistically significant. Among men, no significant trends in height over time were seen among those aged 20-39 and 60 and over (Fryar, Kruszon-Moran, Gu, & Ogden, 2018). In 1999-2000, the mean age-adjusted height for all women was 162.1 cm (63.8 in), while in 2015-2016 it was 161.7 cm (63.7 in). Age-adjusted estimates of height overall, for any race and Hispanic subgroup, among women aged 20-39 or 60 and above, did not show any statistically significant linear trends over time. However, the crude estimate of mean height for women aged 40 to 59 declined over the same period, from 162.8 cm in 1999-2000 to 162.1 cm in 2015-2016 (Fryar, Kruszon-Moran, Gu, & Ogden, 2018). Studying height matters for the sensing system because, in the case of a shorter worker working behind a taller worker, the shorter worker may become partially or completely occluded. This has an impact on how well the sensing system can locate and detect humans.
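The effect of a height difference can be approximated with simple line-of-sight geometry. The sketch below assumes a pinhole view with both workers aligned in front of the camera and uses hypothetical heights and distances; it estimates how much of a shorter rear worker is hidden behind a taller worker.

    # Occlusion-geometry sketch (assumed pinhole model; all values are hypothetical).
    camera_h = 1.83   # camera mounted at roughly 6 ft (m)
    front_h  = 1.77   # height of the taller worker in front (m)
    d_front  = 2.0    # camera-to-front-worker distance (m)
    d_back   = 2.5    # camera-to-rear-worker distance (m)
    rear_h   = 1.62   # height of the shorter worker behind (m)

    # Sight line from the camera over the front worker's head, evaluated at the rear
    # worker's position: everything below this line is hidden from the camera.
    shadow_h = camera_h + (front_h - camera_h) * d_back / d_front
    occluded_fraction = min(max(shadow_h / rear_h, 0.0), 1.0)
    print(f"Rear worker occluded up to {occluded_fraction:.0%} of their height")

With these assumed values the rear worker is fully hidden, which illustrates why the camera mounting height and worker spacing are revisited in the scenarios defined later in this chapter.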
Table 3-8. Mean height of men (inches)

Age          2009-2010  2011-2012  2013-2014  2015-2016
20-39        69.4       69.5       69.4       69.3
40-59        69.6       69.2       69.4       69.2
60 and over  68.4       68.5       68.6       68.3

Table 3-9. Mean height of women (inches)

Age          2009-2010  2011-2012  2013-2014  2015-2016
20-39        64.2       64.4       64.1       64.0
40-59        64.0       63.9       63.0       63.8
60 and over  62.8       62.8       62.6       62.7

Waist: The mean waist circumference for adult men and women aged 20 and over is estimated in Tables 3-10 and 3-11 for the years 1999-2000 through 2015-2016. According to Fryar, Kruszon-Moran, Gu, Carroll, et al. (2018), the age-adjusted mean waist circumference for men was 102.1 cm (40.2 in) in 2015-2016, up from 99.1 cm (39.0 in) in 1999-2000. Women's age-adjusted mean waist measurements increased from 92.2 cm (36.3 in) in 1999-2000 to 98.0 cm (38.6 in) in 2015-2016. Overall, both men's and women's waist circumferences consistently increased linearly over time (Fryar, Kruszon-Moran, Gu, & Ogden, 2018).
It is important to study waist size for the detection and localization of workers in industrialized construction because a difference in body type between two workers working one behind the other may create partial or complete occlusion of a worker.

Table 3-10. Mean waist circumference of men (inches)

Age          2009-2010  2011-2012  2013-2014  2015-2016
20-39        37.8       37.7       38.2       38.7
40-59        40.6       40.9       40.7       40.7
60 and over  41.5       41.5       42.1       42.0

Table 3-11. Mean waist circumference of women (inches)

Age          2009-2010  2011-2012  2013-2014  2015-2016
20-39        36.1       36.5       37.1       37.1
40-59        37.7       38.6       38.9       39.4
60 and over  39.2       38.9       39.1       39.9

Shoulder: The average shoulder width for men in the United States is 16.2 inches (41.1 cm); according to Wiggermann et al. (2019), the average shoulder width for men in the 20-29 and 30-39 age groups was 16.3 inches (41.4 cm). The average shoulder width of males aged 40 to 49 was quite comparable to that of their younger counterparts (41.3 cm, or 16.3 inches). Men between the ages of 50 and 59 have shoulders that are 16.1 inches (41 cm) wide. The average shoulder measurement for males aged 60 to 69 is 15.9 inches (40.5 cm), which is slightly smaller (Keizer et al., 2016; Rao et al., 2000). The average shoulder width for females is 14.4 inches (36.7 cm), according to anthropometric reference data published by the CDC, which measured the biacromial breadth of 8,411 women. Women aged 20-29 measured 14.5 inches (36.9 cm) across the shoulders, whereas women aged 30-39 measured 14.6 inches (37 cm). Women between the ages of 40 and 49 and 50 to 59 have shoulders that are 14.5 inches (36.9 cm) wide. Female adults aged 60 to 69 have shoulders that are, on average, 14.3 inches (36.4 cm) wide, and the average shoulder breadth for women aged 70 to 79 was found to be 14.1 inches (35.7 cm) (Keizer et al., 2016; National Center for Health Statistics (U.S.) & National Health and Nutrition Examination Survey (U.S.), n.d.; Rao et al., 2000). Table 3-12 summarizes the mean shoulder width of men and women aged 20 and above. Similar to height and waist, shoulder width also affects the working of the sensing system: when two workers with different shoulder widths are working in proximity, the worker with broader shoulders might occlude the worker with narrower shoulders.
Table 3-12. Mean shoulder width, 1988-1994

Age     Men (inches)   Women (inches)
20-29   16.3           14.5
30-39   16.3           14.6
40-49   16.3           14.5
50-59   16.1
60-69   15.9           14.3

Skin color and ethnicity: Any hue of brown, from the darkest to the lightest, can be found in human skin. Individual differences in skin color result from variations in pigmentation, which are caused by genetics (inherited from one's biological parents), sun exposure, or both (Jablonski, 2010). Figure 3-3 shows the New Immigrant Survey skin color scale. In their work on the National Longitudinal Study of Freshmen, Massey, Charles, Lundy, and Fischer (2003) first published this scale, which was created by Douglas Massey and Jennifer Martin (Fischer et al., 2003). Even though there has not historically been a common way to measure skin tone in social surveys in the United States, the Massey-Martin scale appears to have gained popularity in recent years (Hannon & Defina, 2016). The scale in Figure 3-3 can therefore be used to distinguish different human skin tones.
Additionally, skin tones vary across individuals of different ethnic backgrounds and geographical locations. For instance, the range of skin tones among members of Asian, African, Caucasian, and Hispanic ethnicities varies and might be white, yellow, or dark. Skin color appearance is also influenced by personal traits such as age, sex, and body part.

Posture in construction: By moving the majority of onsite tasks to regulated, offsite production facilities, industrialized building completely transforms the construction sector. However, labor-intensive prefabrication methods still exist (Tehrani & Alwisy, 2022). Awkward and improper postures and motions reduce productivity and increase project costs in industrialized construction. Workers who must bend or twist excessively, such as in industrialized construction workshops, are prone to awkward body positions (Inyang, 2013). Their muscles are stretched more than they should be, which causes work-related musculoskeletal disorders (CCOHS, 2019); these have a number of negative effects, such as production line delays, significant compensation claims for lost time, and crippling injuries (Hasan & Jha, 2013; St-Vincent et al., 1996).

Figure 3-3. Skin color chart
Workers engaged in industrialized construction perform repeated load carrying, kneeling, twisting, forward back bending, squatting, neck bending, and reaching actions (Antwi-Afari et al., 2018; J. Chen et al., 2017; T. H. Lee & Han, 2013b; Li, 2000; Li Kai, 2000; Mattila et al., 1993). As mentioned in the previous section, the coexistence of humans and robots makes it important for sensors to accurately recognize the human, robot, material, and other surrounding objects. Tracking human poses can be thought of as the process of inferring the locations of the body's joints. One of the most difficult challenges in this task is overcoming "self-occlusion," in which one body part blocks another (N. G. Cho et al., 2013a). In addition, while a human is working close to a robot, the human may be occluded by the robot, or the robot may be partially covered by the human working in front of it. This can reduce the accuracy of detecting and localizing humans, materials, and robots in the manufacturing plant.
Table 3-13. Factors influencing the sensing system

Height (mean: men 68.9 in, women 63.6 in): When a woman is working behind a man, she might be fully covered by the man in front because of the difference in height, with the woman being shorter than the man. This can be the case when both are in the same posture and doing a similar kind of task. The same applies to two men or two women working together with a height difference.

Waist (mean: men 40.4 in, women 38.8 in): Due to differences in the body types of workers working in proximity, a worker can be partially or entirely occluded while working behind another worker.

Shoulder (mean: men 16.18 in, women 14.4 in): The portion of a worker that would be visible behind another worker with a smaller shoulder width, or whether a worker would be occluded because of the larger shoulder width of the worker in front, can be estimated from these values. These factors and the positioning of the workers influence the sensing system.

Ethnicity (White, Hispanic, Non-Hispanic): With a change in ethnicity, skin tone changes. As shown in the NIS scale, skin tone can vary from no pigment to the darkest possible skin. This variation in skin tone may have some similarity with the variation of wood color, which might also affect the color recognition ability of the camera.

Postures (kneeling, squatting, bending, stooping, arm elevation, standing, walking): A challenge sensors face with the recognition of different postures is the problem of self-occlusion. Squatting occludes the lower portion of the legs, while bending leads to facing down and occlusion of the face. These can influence the working capabilities of the sensors.
3.3.1.3 Robot types

The employment of collaborative robots has increased productivity and efficiency in the automation process, which has significantly increased human-robot collaboration (Rodrigues et al., 2022). While a person is working alongside a robot in these circumstances, it is vital to take safety measures into account. One way to address this issue is to estimate a robot's position in order to forecast its future motions and intentions, which lowers the likelihood of collision with nearby objects (Rodrigues et al., 2022). The RGB-D camera's ability to simultaneously capture RGB and depth data has lately gained significant research interest for rigid robot position estimation. Despite significant advancement, some problems remain unanswered, such as posture estimation in environments with no texture or structure (Katz & Brock, 2008; H. Yu et al., 2019c). There is a large variety of articulated robots on the market, spanning hundreds of industrial applications. Articulated robots are produced in all shapes and sizes to tackle endless applications. Table 3-14 lists several robot manufacturing companies. Each of these companies has its own technologies, strategies, products, range, payload, and color, and their robots can perform different tasks (ABB Robotics, 2022; Fanuc, 2022; KUKA AG, 2022; Yaskawa, 2022). An articulated robot has rotational joints and up to 10 or more axes. Robot manufacturing companies prefer different colors and textures for their robots. As discussed above, pose estimation of robots has been studied in the past. However, the challenges of detecting robots with different vertical and horizontal reaches, colors and textures, and axis speeds have not been studied. These might affect the sensors in pose detection of the robot, or in detection of the robot itself, in real time.

When a robotic arm is moving at high axis speed, the camera images might be blurred, which would influence the capability of the camera to precisely detect and localize the robot. In the case of a robot with a long robotic arm reach, the shape and structure of the robot can change considerably, which would affect the sensing system's ability to precisely detect the robot. Also, in the case of color, an object may be perceived to have different colors under different lighting conditions. Hence, robots with different colors and varying lighting can influence the sensing system.
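The influence of a fast-moving arm on image sharpness can be estimated by projecting the arm-tip displacement during one exposure into the image. All values in the sketch below (tip speed, exposure time, working distance, focal length) are assumptions used only to illustrate the order of magnitude of the blur.

    # Rough motion-blur estimate for a fast-moving robotic arm (values are assumptions).
    tip_speed = 2.0        # linear speed of the arm tip, m/s (assumed)
    exposure  = 1 / 30     # camera exposure time, s (assumed)
    distance  = 3.0        # camera-to-arm distance, m (assumed)
    focal_px  = 900        # focal length expressed in pixels (assumed)

    motion_m = tip_speed * exposure                 # distance travelled during exposure
    blur_px  = focal_px * motion_m / distance       # projected blur length in pixels
    print(f"~{blur_px:.0f} px of motion blur")      # ~20 px, enough to smear edges

A blur of this size smears the robot's edges across tens of pixels, which is one concrete reason why detection at top axis speed is listed as a dedicated scenario in Section 3.4.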
Table 3-14. Different robots and their characteristics

ABB Ltd (vertical reach 0.4-4.4 m; orange, graphite white): ABB robots are available with high axis speeds, so recognition of the robot during fast movement, at every point in time, is essential for the sensors.

Yaskawa Electric Corporation (vertical reach 1.3-3.08 m; blue): With end connectors attached to the robotic arm and multi-application capability, recognition of the robot during each activity is important.

Fanuc (vertical reach 0.9-4.7 m; yellow, white, green): Fanuc robots have a high reach. With the axis extended and the structure of the robot changing, sensors need to accurately recognize the robot.

KUKA (vertical reach 0.6-3.6 m; orange): When more than one robot is working, with different speeds and reaches, the robots might form a different shape every time an arm moves from one task to another (e.g., picking up wood and placing it on the framing table). There would be occlusion due to multiple robots, and the varying structure of the robots will affect the recognition capacity of the sensing system.

Universal Robots (vertical reach 0.5-1.75 m; blue, silver): Universal Robots arms have a mixed color of blue and silver with a rounded-edge finish, while the other robots in this list are large industrial robots with heavy payload capacities. When such robotic stations are combined, large variations in the shape, size, color, speed, and texture of the robots can be seen. For this, the sensing system should be well trained to give accurate and precise recognition results.
3.3.2 Task-Driven Factors Influencing Sensing System

The framing workstation in robotics-based manufacturing of industrialized construction involves several tasks. Table 3-15 lists the wall fabrication tasks. Analyzing these tasks provides essential information for this research. First, these tasks help identify the postures and body angles of a worker while performing wall framing or assembly. Second, the postures shown in Table 3-17 can be combined with the identified tasks. Combining these postures with the tasks enables a pair-wise analysis to identify scenarios that could influence the proposed sensors.

Table 3-15. Tasks in industrialized construction

Task 1: Studs, plates, and pre-assembled components nailed together on the framing table.
Task 2: Measuring and cutting cement plasterboard (CP) on the floor.
Task 3: Loading cement plasterboard onto the frame from the material station.
Task 4: Screwing the board to the frame on a framing wall.
Task 5: Fabricating window/door openings, or headers of garage openings, on the floor.
Task 6: Fixing window and door pods on the framing table.
Task 7: Pre-cutting sheathing to the required dimensions on the floor.
Task 8: Placing, positioning, and fixing sheathing in position on the insulation and sheathing station table.
Task 9: Stapling sheathing and checking openings and wall edges.
Task 10: Installing windows and doors on the wall.
Task 11: Installing the cladding system on the walls.
(Ajweh, 2014; Ayinla et al., 2021; Inyang et al., 2012; Lachance et al., 2022)
3.3.2.1 Human posture and angle in industrialized construction

As previously mentioned, the position of the human body significantly affects how the sensing system functions, and self-occlusion between various body parts further increases the complexity of the inference problem. Self-occlusion presents difficulties for conventional object detection techniques (N. G. Cho et al., 2013b). It is therefore essential to study the different postures and body angles of workers in industrialized construction. Generally, the amount of bending from the neutral posture is utilized to determine how each body part, including the torso, shoulder, and elbow, is positioned. A worker is in a neutral posture when all joints are aligned and there is little physical stress on bones, muscles, nerves, or tendons; therefore, the least effort is required when standing or sitting (Nath et al., 2017a). Researchers have demonstrated that it is possible to separate the degree of bend in various body parts into ranges to reduce observational mistakes (Kilbom, 1994; Lowe, 2011; van Wyk et al., 2009). The identified joint angles between the limbs are shown in Figure 3-4 below. In this instance, the body has been divided into the upper body, torso, and lower body. Body postures are a synthesis of many limb movements (Valero et al., 2017a). Workers in a typical industrialized construction workstation must adopt a distinct stance for each task. Worker flexion of the trunk (back bending), knee flexion, and arm elevation are all involved in these positions.
Each body part is assigned a condition based on the rotation of one or more joints with respect to an initial orthostatic (standing) position. Consider the flexion of an arm as an example: this can be described as mild, raised, or substantial elevation. Table 3-16 below provides a summary of these body joint angles.

Figure 3-4. Typical body angles
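In practice, these body-part conditions can be quantified by computing the angle at a joint from the 2D (or 3D) keypoints returned by a pose estimator. The sketch below shows one such computation; the keypoint coordinates are invented for illustration and do not come from any dataset used in this study.

    # Joint angle from three keypoints (e.g., hip-knee-ankle for knee flexion).
    import numpy as np

    def joint_angle(a, b, c):
        """Angle at point b (degrees) formed by the segments b->a and b->c."""
        a, b, c = map(np.asarray, (a, b, c))
        v1, v2 = a - b, c - b
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

    # Hypothetical normalized image coordinates for one leg.
    hip, knee, ankle = (0.52, 0.40), (0.50, 0.62), (0.62, 0.78)
    flexion = 180 - joint_angle(hip, knee, ankle)
    print(f"Knee flexion: {flexion:.0f} degrees from the straight (neutral) position")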
Table 3-16. Postures and their angles

Posture: trunk inclination; knee flexion (stooping; kneeling, with the calf parallel to the floor); arm elevation. Each posture is defined by the rotation angle of the corresponding joint from the neutral standing position. (Nath et al., 2017b; S. J. Ray & Teizer, 2012; Shen et al., 2017; Valero et al., 2017b)

For the purpose of this study, different postures in industrialized construction were identified with the help of past studies. Reaching behind, twisting, working aloft, bending the wrists, kneeling, stooping, forward and backward bending, and squatting are typical examples of awkward postures (T. H. Lee & Han, 2013b). Table 3-17 lists the postures, numbered from 1 to 6, that will be combined with the different tasks and used to identify scenarios influencing the computer vision system.
Table 3-17. Postures in IC

Posture 1: Kneeling
Posture 2: Squatting
Posture 3: Bending
Posture 4: Arm elevation (reaching)
Posture 5: Standing
Posture 6: Walking
(Antwi-Afari et al., 2018; Chen et al., 2017; T. H. Lee & Han, 2013a; K. W. Li, 2000; Li Kai, 2000; Mattila et al., 1993; Palikhe et al., 2020; S. J. Ray & Teizer, 2012; Shen et al., 2017)

3.3.2.2 Pair-wise analysis of tasks and postures

Tables 3-15 and 3-17 show the different postures and tasks that need to be carried out in industrialized construction. Every worker has their own style of performing different tasks; in other words, with a change in worker, the posture in which a particular task is performed can change. The proposed task-driven postures were derived by observing construction workers at the Clayton Homes factory, as well as by reviewing other factory videos of wall framing in industrialized construction (finehomebuilding, 2011; Landmark Homes, 2018). Table 3-18 shows the common postures of workers while performing different tasks for wall framing.
Table 3-18. Task-related postures (postures P1-P6 versus tasks T1-T11; an x marks a task during which the posture is typically adopted)

P1: x x x
P2: x
P3: x x x x x x x
P4: x x
P5: x x x x x x x
P6: x x x x x

This research utilizes a pair-wise analysis of construction workers performing wall framing tasks in order to identify potential occlusion cases. Table 3-19 shows the combinations of the 11 industrialized construction tasks that can be performed by two workers at a time. The proposed coupling is based on the logical sequence of the identified tasks, following the successor and precedence relationships of the prefabrication steps of the wall framing activity in industrialized construction. This sequence begins with the framing of the wall, followed by sheathing and installation of windows and doors, and finally exterior finishing.
Table 3-19. Compatible tasks (an x marks a pair of tasks, T1-T11, that two workers can perform simultaneously)

T1: x x x
T2: x x x x x x
T3: x x x x x
T4: x x x x
T5: x x x
T6: x x
T7: x x x
T8: x x
T9: x
T10: x
T11: x

After having identified the compatible tasks, it is important to study the human postures and body angles for the identified couples in order to identify occlusion where one worker's body occludes another's (i.e., one worker might entirely or partially occlude the other worker working in close proximity). Tables 3-20 and 3-21 describe all the possible cases identified by the pair-wise analysis of postures. These cases help identify scenarios with humans that would create an obstacle for the proposed computer vision system. It should be noted that while Table 3-20 provides an insight into the possible occlusion between two workers, it does not consider the height of the camera or the distance between the two workers (the camera height is assumed to be less than or equal to human height, and the distance between workers equals zero). As such, these values will be updated in the proposed scenarios for the camera and LiDAR in order to consider the height of the cameras and LiDAR mounted on a tripod (greater than 6 ft) and the distance between workers needed to allow them to conduct their tasks safely.
Table 3-20. Description of the identified cases from the pair-wise analysis of postures

Case  Description                                   Body Part Percentage
C1    Face covered                                  7%
C2    Right arm covered                             6.5%
C3    Left arm covered                              6.5%
C4    Right leg covered                             18.5%
C5    Left leg covered                              18.5%
C6    Torso covered                                 43%
C7    Upper body covered (head + arms + torso)      63%
C8    Lower body covered (legs)                     37%
C9    Right side covered (right arm + right leg)    25%
C10   Left side covered (left arm + left leg)       25%
C11   Full body coverage                            100%
C12   Partial body coverage                         90%
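The composite cases in Table 3-20 follow directly from the per-part percentages, as the short calculation below shows (C7, C8, and C9/C10 are reproduced from the head, arm, leg, and torso values).

    # Re-deriving the composite coverage cases in Table 3-20 from the per-part values.
    body_part = {"head": 7.0, "arm": 6.5, "leg": 18.5, "torso": 43.0}   # percentages

    upper_body = body_part["head"] + 2 * body_part["arm"] + body_part["torso"]   # C7
    lower_body = 2 * body_part["leg"]                                            # C8
    one_side   = body_part["arm"] + body_part["leg"]                             # C9 / C10

    print(upper_body, lower_body, one_side)   # 63.0 37.0 25.0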
Table 3-21. Pair-wise analysis of postures

P1 × P1: C9, C10, C11, C12
P2 × P1: C9, C10, C12; P2 × P2: C9, C10, C11, C12
P3 × P1: C1, C2, C3, C4, C5, C6, C8; P3 × P2: C2, C3, C6, C8; P3 × P3: C2, C3, C4, C5, C11, C12
P4 × P1: C3, C4, C5, C12; P4 × P2: C2, C3, C6, C8; P4 × P3: C1, C2, C3, C6; P4 × P4: C2, C3, C9, C10, C11, C12
P5 × P1: C9, C10, C11, C12; P5 × P2: C6, C8, C12; P5 × P3: C9, C10, C11, C12; P5 × P4: C2, C3, C9, C10, C11, C12; P5 × P5: C9, C10, C11, C12
P6 × P1: C9, C10, C11, C12; P6 × P2: C2, C3, C8, C11, C12; P6 × P3: C9, C10, C11, C12; P6 × P4: C2, C3, C9, C10, C11, C12; P6 × P5: C2, C3, C9, C10, C11, C12; P6 × P6: C9, C10, C11, C12

Table 3-20 describes the identified cases that can be seen while a pair of workers is working in a manufacturing setup. According to (Body Composition | Nutritional Assessment, n.d.), the human body is divided into different parts, and typical body part percentages are given as shown in Figure 3-4. Cases 1-6 give the percentage covered when a full body part is occluded by another worker working in front. Cases 7-10 show combinations of different body parts being covered by another worker in front. Cases 11 and 12 describe when a worker is entirely or partially covered by another worker. The partial coverage follows Tables 3-8 and 3-9, which give the average heights for men and women. For Case 12, we consider the highest and lowest values from these mean height tables, which are 69.6 in. for men in the 40-59 age group and 62.6 in. for women in the 60-and-over age group. Accordingly, the percentage of coverage when a woman is working in front of a man is 90%.

Figure 3-5. Posture-based occlusion in the pair-wise analysis. A) Detailed analysis of postures 1-2, B) detailed analysis of postures 2-4, C) detailed analysis of postures 4-6.
3.3.2.3 Task-driven material and robot criticality

Different sizes and types of material can be used in wall framing. The occlusion of a worker carrying a material depends on the dimensions of the material being carried. For example, a cement plasterboard of ¼ in. x 3 ft. x 5 ft. would occlude most of the worker, whereas a worker carrying a wood stud of 2 in. x 4 in. x 8 ft. will not be significantly occluded. Similarly, in robotics-based manufacturing, different robots can be used, as shown in Table 3-14. Robots are available with different horizontal and vertical reaches, varying base dimensions, and arm diameters. Studying the dimensions of materials and robots is therefore an important aspect of determining their criticality for a task. Table 3-22 below lists some of the materials that can be used in the wall framing activity. In addition, Table 3-23 shows the coverage of a human worker due to the material. We assume full coverage of a body part if there is even a small coverage due to the material. For example, a wood stud of 2 in. x 4 in. x 8 ft. cannot cover the entire head, torso, and legs of a kneeling worker, but because some portion of each would be covered, we consider 100% coverage of the head, torso, and legs. Also, the positioning of material and human is based on the worst-case scenario, where the occlusion of the human would be at its maximum. Similarly, all materials listed in Table 3-22 are used to determine the material criticality. In addition, Table 3-24 considers an ABB IRB 6620 robot covering the human worker working behind it. The ABB IRB 6620 has a vertical reach of 2.2 m and a base of 0.85 m; considering the maximum vertical extension of its arm, humans working behind the robot can be entirely occluded.
Table 3-22. Material used for wall assembly

Wood Stud: (2 in. x 4 in. x 8 ft.), (2 in. x 3 in. x 96 in.), (2 in. x 4 in. x 92 5/8 in.), (2 in. x 4 in. x 104 5/8 in.), (1 in. x 4 in. x 3.25 ft.), (1 in. x 2 in. x 8 ft.), (1 in. x 3 in. x 8 ft.). Tasks: T1.
Cement Plasterboard: (1/4 in. x 3 ft. x 5 ft.), (1/2 in. x 3 ft. x 5 ft.), (1/2 in. x 4 ft. x 8 ft.), (0.25 in. x 3 ft. x 5 ft.), (0.42 in. x 3 ft. x 5 ft.), (5 ft. x 3 ft. x 7/16 in.). Tasks: T2, T3, T4.
Sheathing: (1 in. x 4 ft. x 8 ft.), (7/16 in. x 4 ft. x 8 ft.), (11/32 in. x 4 ft. x 8 ft.), (23/32 in. x 4 ft. x 8 ft.), (15/32 in. x 4 ft. x 8 ft.). Tasks: T7, T8, T9.
Doors: (24 in. x 80 in.), (30 in. x 80 in.), (36 in. x 80 in.), (32 in. x 80 in.), (20 in. x 80 in.), (48 in. x 80 in.). Tasks: T6, T10.
Windows: (21 in. x 45 3/4 in.), (30.75 in. x 18.25 in.), (35.375 in. x 59.25 in.), (30.75 in. x 14.25 in.). Tasks: T6, T10.
Wall Cladding: (93 in. x 6 in. x 0.8 in.), (1 in. x 6 in. x 8 ft.), (94.5 in. x 4.8 in. x 0.5 in.), (106 in. x 6 in. x 0.5 in.), (6 in. x 93 in. x 0.8 in.). Tasks: T11.
Table 3-23. Material criticality

Wood Stud (2 in. x 4 in. x 8 ft.), coverage by worker posture: 87% (head + torso + legs); 50% (torso + head); 13% (arms); 50% (torso + head); 50% (torso + head); 50% (torso + head).
Cement Plasterboard (1/2 in. x 4 ft. x 8 ft.), all postures: 100% (entire human).
Sheathing (1 in. x 4 ft. x 8 ft.), all postures: 100% (entire human).
Windows (35.37 in. x 59.25 in.), coverage by worker posture: 100% (entire human); 100% (entire human); 80% (head + torso + legs); 93% (torso + arms + legs); 93% (torso + arms + legs); 93% (torso + arms + legs).
Wall Cladding (106 in. x 6 in. x 0.5 in.), coverage by worker posture: 87% (head + torso + legs); 80% (head + torso + legs); 80% (legs + torso); 50% (head + torso); 87% (head + torso + legs); 87% (head + torso + legs).
Table 3-24. Robot criticality

ABB IRB 6620, coverage by worker posture: 100% (entire body); 93% (torso + arms + legs).
3.4 Identification of Scenarios for RGB-D Camera and LiDAR

The proposed scenarios aim at investigating the different postures workers can take while performing different tasks, or supporting tasks, related to the wall framing workstation, as identified in Table 3-17. The main purpose of exploring different postures and tasks is to analyze the potential occlusion that will take place in a robotic station and to determine the setup of the sensing systems, RGB-D cameras and LiDARs, needed to accurately and efficiently track and localize the materials, humans, and robots in the described environment.

The proposed scenarios are categorized into four groups. The first three groups contain scenarios where humans, materials, and robots are analyzed separately, without any interaction with the other groups. In the human group, the proposed scenarios correspond to the pair-wise analysis of the different tasks that two human workers can conduct simultaneously in the robotic station. The two workers can do the same task or different compatible tasks, as listed in Table 3-19. While conducting the tasks, workers can work in proximity to each other or at a distance, but there will be times when one worker is covered, or occluded, by the other worker. In other words, one worker might partially or entirely cover the other worker, and the sensor would not be able to capture the activities. The scenarios under the material section analyze the different materials used in panel framing workstations, including wooden studs, cement plasterboards, windows, and doors. Those materials can be installed on a framing table or the floor, and they can be stacked on the material table, on the floor, or, in some cases, against the wall. It is important to identify and track all the material being used. In the case of the robot scenarios, as described in Table 3-14, robots are available in different colors, reaches, and axis speeds. As we aim at achieving real-time recognition of activities, sensors are required to recognize robot activities at the top axis speed and the change in posture of the robot at every moment. The fourth category includes scenarios describing all the components of a robotic station (i.e., humans, materials, and robots) needed for the manufacturing of floor and wall panels. Those scenarios will ultimately result in the generation of guidelines for the integration of sensing systems in the robotics-based manufacturing of industrialized construction.

Figure 3-6. Scenarios for 3D camera and LiDAR
PAGE 96
3.4.1 Scenarios for RGB-D Camera

3.4.1.1 Only human
1. One worker is bending down to nail/bolt studs together, and another worker is performing the same task while kneeling.
2. One worker is kneeling to nail/bolt the frame, and the other worker is standing and then bending to cut cement plasterboard.
3. One worker has his/her arms elevated (e.g., holding a wooden stud), and the other worker is in a position cutting cement plasterboard.
4. Both workers have their arms elevated (e.g., holding a wooden stud).
5. Both workers are kneeling and nailing/bolting CP to the frame, or both workers are kneeling and fabricating the openings.
6. One worker is walking or standing, and another worker is kneeling or bending down to nail/bolt the CP to the frame.
7. Both workers are wearing PPE (safety vest, hand gloves, eye protection, and hard hat).

3.4.1.2 Only material
1. Studs arranged on a material table
2. Cement plasterboard stacked on the floor
3. CP and studs placed next to each other
4. Studs with different textures or colors stacked next to each other
5. Studs randomly placed on top of each other
6. Cement plasterboard stacked against the wall

3.4.1.3 Only robot
1. Accurate detection of a robot while it is operating at its top speed
2. Accurate and full tracking of a robot with its arms at maximum horizontal and vertical reach
3. Detection of robots with end connectors attached to the arms
4. Simultaneous and accurate detection of multiple robots with different speeds, reaches, and colors
PAGE 97
3.4.1.4 Manufacturing (including humans, materials, and robots)
1. A human worker is kneeling and cutting a wooden stud or cement plasterboard.
2. A human worker is walking with elevated arms carrying a wooden stud, and a robot is static in the background.
3. One worker is walking towards a framing table, another worker is bending down to nail a wooden stud, and a robot is swinging towards the material table to pick up a material.
4. One worker is in the foreground, and the other is behind a robot with only his upper body visible.

Figure 3-7 shows some examples of the scenarios discussed above. In Figure 3-7(A), Task 1 is performed: the worker is carrying wood studs to the framing table to nail them together. Robot-to-human occlusion occurs because the ABB IRB 6620 robot is in front of the worker; however, there is no occlusion caused by material (material-to-human occlusion) or by another human (human-to-human occlusion). Figure 3-7(B) shows two workers performing Task 3, where one worker is bending to load cement plasterboard onto the frame and the other worker is walking with cement plasterboard. The lower body of the worker carrying the cement plasterboard, highlighted with a red rectangle, is partially occluded by the worker in front (human-to-human occlusion, as described in Case 3); there is no occlusion caused by material or by the robot. Figure 3-7(C) shows two workers performing Task 8, where one worker is bending to place the sheathing in position and the other worker is holding the sheathing near the framing table. The upper body of the bending worker is occluded by the worker standing in front (human-to-human occlusion, as described in Case 2); again, there is no occlusion caused by the robot or the material.
PAGE 98
Figure 3-7. Example of scenarios: A) Task 1, B) Task 3, C) Task 8

The scenarios described above show why all scenarios must be considered and tested with the proposed computer vision systems in order to identify the ways in which object detection and tracking of humans and activities in industrialized construction can fail. This analysis can be used to update existing algorithms and to train sensing models for complex activities in industrialized construction.
PAGE 99
It should be noted that the occlusion in the identified scenarios is significantly influenced by the camera specifications and the dimensions of the robotic station. Consequently, mathematical equations can be derived to help construction practitioners determine where the camera should be located, what area of the robotic station can be covered, and at what point a worker is invisible, partially visible, or fully visible to the camera. To estimate the coverage of human workers, the key parameters for the proposed equations include the field of view (FOV) of the camera, the maximum effective range of the camera, the height at which the camera is mounted on a tripod, and the average height of a human. Figure 3-8 shows a camera setup for a robotic station with one camera.

Figure 3-8. Human visibility via camera

The location-based geometric analysis of human visibility in the FOV of a camera results in the following equations, which lead to the calculation of the rate of invisibility, a key indicator of the criticality of a posture, for a single worker in accordance with the camera specification and the camera location:
PAGE 100
(Equations 3-3 through 3-8, which define the rate of invisibility of a single worker, are given in the original as typeset expressions that are not reproduced here.)

To account for the proposed pair-wise analysis of human workers, the geometric analysis illustrated above can be extended to the visibility and invisibility of two workers, where one worker occludes the view of the other. Figure 3-9 shows the proposed occlusion-based geometric analysis of two workers located at distances X0 and X from the camera. To calculate the occlusion-driven invisibility percentage of the worker located at X, the greater distance from the camera, two cases need to be considered: (1) HOX ≤ HX (Figure 3-9) and (2) HOX > HX (Figure 3-10). The following equations reflect the two cases:
PAGE 101
Figure 3-9. Occlusion-based rate of visibility, Case 1

Figure 3-10. Occlusion-based rate of visibility, Case 2

(Equations 3-9 through 3-13, which formalize the two occlusion cases, are given in the original as typeset expressions that are not reproduced here.)

It should be noted that the two workers are considered to be in the same line of view of the camera (i.e., one worker is exactly behind the other) and that the width difference
PAGE 102
among workers has not been considered in these equations, in order to account for the worst-case scenario. Likewise, the proposed Equations 3-8 and 3-13 calculate two rates of invisibility for a human worker based on their location and the possible occlusions identified in the pair-wise analysis. As such, the posture criticality analysis utilizes the greater of the two rates in order to account for the worst-case scenario.

3.4.2 Decision Matrix for Camera

The camera is assumed to be mounted at the average height of commercially available tripods, as shown in Figure 3-8. The height of the human worker is obtained from the National Health Statistics Report published in 2018 (Fryar, Kruszon-Moran, Gu, Carroll, et al., 2018); that is, the height of a human worker in the standing and walking postures is 5.7 feet. To estimate the height of human workers in the kneeling and bending postures, we use the posture analysis section to divide the human body into three main sections, where the arms and torso, the thigh bone, and the lower legs account for 63%, 23.2%, and 13.8% of stature, respectively (Body Composition | Nutritional Assessment, n.d.). So, for a human height of 5.7 feet, the upper body measures 3.59 feet, the thigh bone 1.32 feet, and the lower leg 0.78 feet. With these measurements of the joints, the height of a kneeling worker is taken as the sum of the upper body and the thigh bone, as shown in Figure 3-4 and Table 3-16. The height of human workers in a squatting posture requires further investigation: Chung et al. (2003), in their study to determine stool height, observed that a worker squatting close to
PAGE 103
the ground is more comfortable using a stool for support, and the height of a squatting worker is estimated on that basis. Having estimated the average height of human workers in the different postures, the estimation of the distance between the two workers depends heavily on the dimensions of the robotic station and on the area the camera will view. Using Equations 3-3 and 3-8, within the effective range of the camera the rate of invisibility of a worker in a standing or walking posture is 0%, and beyond that range the rate of invisibility is 100%. Furthermore, we consider one worker positioned where the camera captures 50% of a worker's height, with the other worker 4.32 feet behind (i.e., worker 1 is 4.32 feet behind worker 2). As mentioned above, we assume the two workers are in the same line of the camera's view, one behind the other at that distance. In the case of a squatting worker in front of a kneeling worker (Posture 2 - Posture 1), the camera's sight line grazing the squatting worker is obtained per Equation 3-10; per Equation 3-11 the visible height of the kneeling worker follows, and, using Equation 3-13, the percentage of the non-visible portion of the worker is 27.90%. In the case of a standing worker with arms elevated in front of a bending worker (Posture 4 - Posture 3), applying Equation 3-10 and then Equations 3-11 and 3-13, the visible height of the bending worker is 0.86 feet and the percentage of the non-visible portion of the bending worker is 84.27%. With this method and the equations derived above, a framework for the decision
PAGE 104
matrix that utilizes the rate of invisibility is developed, as illustrated in Table 3-25. The illustrated decision matrix follows the proposed considerations, which include the 4.32-foot spacing between workers and the assumed camera placement. The decision matrix table also proposes the required number of cameras based on the rate of invisibility, using a rule-based analysis as follows (a computational sketch of this rule and of the underlying occlusion geometry is given after the list):
(1) No complete invisibility: a pair-wise analysis in which all rates of invisibility are less than 100% indicates that the two workers are visible from just one camera.
(2) Single complete invisibility: one 100% rate of invisibility in the pair-wise analysis requires two cameras.
(3) Multiple complete invisibilities: more than one 100% rate of invisibility requires four cameras in order to achieve complete coverage of the workstation.
It should be noted that the proposed rule-based analysis follows a worst-case-scenario policy, similar to the proposed equations, where the selection of the number of cameras is based on two workers in the same line of view of the camera. This setup is explained in further detail in Section 3.5.
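Because Equations 3-3 through 3-8 and 3-9 through 3-13 are given in the original only as typeset images, the sketch below illustrates one plausible reading of the single-worker geometry, the two-worker occlusion geometry, and the camera-count rule listed above. The 5.7-foot stature, the 63%/23.2%/13.8% segment split, the 4.32-foot worker spacing, and the 100% threshold come from the text; the camera mounting height, the effective range, the grazing-sight-line construction, the front-worker distance, and all function and variable names (single_worker_invisibility, occlusion_invisibility, cameras_required) are illustrative assumptions rather than the author's equations.

    import math

    # Body-segment lengths for a 5.7 ft worker (shares quoted in Section 3.4.2)
    STATURE = 5.7
    upper_body = 0.630 * STATURE   # ~3.59 ft (arms and torso)
    thigh      = 0.232 * STATURE   # ~1.32 ft (thigh bone)
    lower_leg  = 0.138 * STATURE   # ~0.79 ft (lower leg)
    kneeling_height = upper_body + thigh   # ~4.91 ft, assuming the shin rests on the floor

    def single_worker_invisibility(distance, h_worker, h_cam=5.0,
                                   fov_v_deg=55.0, max_range=29.5):
        # Sketch of Equations 3-3 to 3-8: rate of invisibility (0..1) of a lone worker
        # at a horizontal distance from a tripod-mounted camera whose optical axis is
        # assumed horizontal.  h_cam, fov_v_deg and max_range are placeholder values.
        if distance > max_range:                 # beyond the effective range, nothing is captured
            return 1.0
        half_tan = math.tan(math.radians(fov_v_deg) / 2.0)
        lowest = h_cam - distance * half_tan     # lowest height inside the FOV at this distance
        highest = h_cam + distance * half_tan    # highest height inside the FOV at this distance
        visible = max(0.0, min(h_worker, highest) - max(0.0, lowest))
        return 1.0 - visible / h_worker

    def occlusion_invisibility(h_front, h_rear, x_front, x_rear, h_cam=5.0):
        # Sketch of Equations 3-9 to 3-13: occlusion-driven rate of invisibility (0..1)
        # of the rear worker when both workers stand on the camera's line of sight.
        # The line from the lens grazing the top of the front worker's head is extended
        # to the rear worker; everything below where it lands is assumed hidden.
        h_block = h_cam + (h_front - h_cam) * (x_rear / x_front)
        visible = max(0.0, h_rear - max(0.0, h_block))
        return min(1.0, 1.0 - visible / h_rear)

    def cameras_required(pairwise_rates):
        # Rule-based count from Section 3.4.2: one camera if no pairing is fully
        # occluded, two if exactly one pairing reaches 100%, four otherwise.
        full = sum(1 for r in pairwise_rates if r >= 1.0)
        return 1 if full == 0 else (2 if full == 1 else 4)

    # Illustrative call only (the distances are assumptions): a kneeling worker
    # 4.32 ft behind a squatting worker, as in the Posture 2 - Posture 1 example.
    rate = occlusion_invisibility(h_front=0.5 * STATURE, h_rear=kneeling_height,
                                  x_front=8.0, x_rear=8.0 + 4.32)

Because the camera height and the exact sight-line construction used in the thesis are not recoverable, this sketch is not expected to reproduce the 27.90% and 84.27% figures quoted above exactly; it only makes the structure of the calculation explicit.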
PAGE 105
Table 3-25. Posture criticality for camera
Worker #1 (rows) vs. Worker #2 (columns): Kneeling P1, Squatting P2, Bending P3, Arm Elevation P4, Standing P5, Walking P6; followed by Total, Weighted Posture Criticality, and No. of Cameras Required.
Kneeling P1: 68.02%, 90.51%, 61.06%, 58.58%, 58.58%, 58.58%; Total 395.36%; Weighted 76%; Cameras 1
Squatting P2: 27.9%, 37.12%, 25.04%, 21.03%, 21.03%, 21.03%; Total 153.15%; Weighted 29%; Cameras 1
Bending P3: 86.15%, 100%, 74.94%, 74.38%, 74.38%, 74.38%; Total 484.23%; Weighted 93%; Cameras 2
Arm Elevation P4: 93.89%, 100%, 84.27%, 80.87%, 80.87%, 80.87%; Total 520.77%; Weighted 100%; Cameras 2
Standing P5: 93.89%, 100%, 84.27%, 80.87%, 80.87%, 80.87%; Total 520.77%; Weighted 100%; Cameras 2
Walking P6: 93.89%, 100%, 84.27%, 80.87%, 80.87%, 80.87%; Total 520.77%; Weighted 100%; Cameras 2
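Table 3-25 does not state how the Weighted Posture Criticality column is obtained, but the printed values are consistent with normalizing each row total by the largest row total. A minimal check of that inference (the dictionary-based reproduction below is an assumption of mine, not the author's stated method):

    # Row totals (%) from Table 3-25 and the inferred normalization
    row_totals = {"P1": 395.36, "P2": 153.15, "P3": 484.23,
                  "P4": 520.77, "P5": 520.77, "P6": 520.77}
    largest = max(row_totals.values())
    weighted = {posture: round(100 * total / largest) for posture, total in row_totals.items()}
    # -> {'P1': 76, 'P2': 29, 'P3': 93, 'P4': 100, 'P5': 100, 'P6': 100}, matching the table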
PAGE 106
Using the posture criticality table, the task criticality can be identified as follows:

Table 3-26. Task criticality for camera (each task is marked in the original against the postures P1-P6 it involves; the posture marks are not reproduced here)
Task 1: 100.00%; Task 2: 60%; Task 3: 66%; Task 4: 76%; Task 5: 42%; Task 6: 66%; Task 7: 60%; Task 8: 100.00%; Task 9: 66%; Task 10: 68%; Task 11: 94%
PAGE 107
Similar to the preceding section, the following section discusses the scenarios and the decision matrix for LiDAR.

3.4.3 Scenarios for LiDAR

A few considerations apply when identifying the scenarios for LiDAR. In the previous section (Selection of Sensing System), different types of LiDAR were discussed in detail, and the Intel RealSense L515 was concluded to be an appropriate LiDAR device for our purpose. In contrast to existing time-of-flight technologies, the L515 is a solid-state LiDAR depth camera that employs a patented MEMS mirror-scanning technology. The L515 has a FOV of 70° x 55° (±3°) and a range of 9 m, as shown in the table above. The robotic station under consideration would mainly consist of two human workers, two ABB IRB 6620 robotic arms, a material table, and a framing table. The LiDAR needs to be mounted so that the maximum area of the robotic station is covered and all the activities performed by workers and robots are recognized and tracked at every point in time. Several placements were considered according to the specifications of the LiDAR and the dimensions of the robotic station. Figure 3-11 shows the LiDAR location that is most appropriate, in that its points can cover the major part of the humans and the robotic arm. Figure 13 shows the options that were studied before selecting the location for the LiDAR.
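The placement reasoning above turns on how much of the station a 70° x 55° field of view covers from a given mount point. A minimal sketch of that arithmetic, assuming the sensor looks horizontally across the station (the 5 m evaluation distance and the function name fov_extent are illustrative, not values from the thesis):

    import math

    def fov_extent(distance, fov_deg):
        # Linear extent covered by one angular field of view at a given distance
        return 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)

    # Intel RealSense L515: 70 x 55 degree FOV and roughly 9 m range (from the text)
    horizontal_m = fov_extent(5.0, 70.0)   # ~7.0 m wide swath at 5 m from the sensor
    vertical_m   = fov_extent(5.0, 55.0)   # ~5.2 m tall swath at 5 m from the sensor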
PAGE 108
Figure 3-11. Appropriate placement of LiDAR

Similar to the scenarios identified for the RGB-D camera, the scenarios listed below might influence the working of the LiDAR.

3.4.3.1 Only human
1. Both workers are kneeling to bolt/nail the stud and are in the same line of view from the LiDAR.
2. One worker is kneeling or bending, and the other is squatting. The squatting worker covers the upper body of the kneeling worker.
3. One worker is kneeling, and the other worker is bending. The kneeling worker covers the legs of the bending worker.
4. One worker is bending, and the face is covered by a worker stooping.
5. The arms of one kneeling worker are elevated, covering the lower body (torso to legs) of a standing worker.
6. One worker is bending, and the other worker standing beside covers the lower body of the bending worker.
7. One worker is walking and the other worker is standing. The walking worker occludes the view for a moment and later clears it.
8. One worker is bending, and the other worker covers that worker's face.

3.4.3.2 Only material
1. A cement plasterboard sheet is stacked on the ground
PAGE 109
2. Wooden studs are supported against the stack of cement plasterboard
3. Cement plasterboard is supported against the material table with a window/door cut-out
4. The material table has a few wooden studs on it, and cement plasterboard is supported next to the wooden studs on the material table
5. Window/door frames are on the material table
6. Wooden studs, cement plasterboard, and door/window frames are on the framing table

3.4.3.3 Only robot
1. Recognition of both robots
2. A robot in operation with both robot arms in different positions
3. Recognition of robots with end connectors
4. Self-occlusion of the robot due to an arm or end connector in front

3.4.3.4 Manufacturing (including humans, materials, and robots)
1. One worker is carrying a stud with elevated arms, and the other worker is kneeling near the material table
2. A human is walking near the robot, and part of the robot is occluded
3. A human is walking with a cement plasterboard while the other worker, standing with a stud in hand, gets occluded
4. A robotic arm is picking up a wooden stud from the material table for the framing station while the other two workers are bending and nailing/bolting the wooden frame on the framing table
5. A human is walking behind the robot, gets occluded, and goes out of the FOV
PAGE 110
3.4.4 Decision Matrix for LiDAR

With a concept similar to Equations 3-3 through 3-13 derived for the camera, we build a decision-matrix framework for the use of LiDAR. As shown in Figure 3-12, the required changes are made to Figure 3-8 for the case of LiDAR. All the measurements for the human body and the robotic station used for the camera remain the same. Per Equations 3-3 and 3-8, within the LiDAR's effective range there is 100% visibility of workers, and beyond that range the rate of invisibility is 100%. Furthermore, we assume one worker is 10.58 feet behind the other worker. So, in the case of a squatting worker in front of a standing worker (Posture 2 - Posture 5), the sight line grazing the squatting worker is obtained per Equation 3-10; using Equation 3-11, the visible height of the standing worker is 2.46 feet, and, using Equation 3-13, the percentage of the non-visible portion of the worker is 56.84%. In the case of a standing worker in front of a squatting worker (Posture 5 - Posture 2), according to Equations 3-10, 3-11, and 3-13, the visible height of the worker is negative, so the percentage of the non-visible portion of the squatting worker is 100%; that is, the squatting worker at the back is entirely occluded by the standing worker in front. In the case of a worker with arms elevated in front of a standing worker (Posture 4 - Posture 5), although the postures differ, the heights considered are the same. So, per Equation 3-13, the percentage of the non-
PAGE 111
visible height of the worker is 94.38%. With the same method and using Equations 3-3 through 3-13, a decision-matrix framework for LiDAR is produced, as shown in Table 3-27.

Figure 3-12. Human visibility through LiDAR
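If the pairwise-occlusion sketch given earlier for the camera is reused here, only the sensor placement and the 10.58-foot worker spacing quoted above change; the mounting height and offset below are assumptions, and occlusion_invisibility is the hypothetical helper from that sketch, not a routine defined in the thesis.

    # Hypothetical reuse for LiDAR: a standing 5.7 ft worker 10.58 ft behind a
    # squatting worker, seen by a sensor at an assumed 7 ft height and 8 ft offset.
    lidar_rate = occlusion_invisibility(h_front=0.5 * 5.7, h_rear=5.7,
                                        x_front=8.0, x_rear=8.0 + 10.58, h_cam=7.0)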
PAGE 112
Table 3-27. Decision matrix for LiDAR
Worker #1 (rows) vs. Worker #2 (columns): Kneeling P1, Squatting P2, Bending P3, Arm Elevation P4, Standing P5, Walking P6; followed by Total, Weighted Posture Criticality, and No. of Cameras Required.
Kneeling P1: 90.83%, 100%, 81.53%, 77.89%, 77.89%, 77.89%; Total 506.03%; Weighted 87%; Cameras 2
Squatting P2: 56.82%, 87.8%, 59.23%, 56.84%, 56.84%, 56.84%; Total 374.37%; Weighted 64%; Cameras 1
Bending P3: 100%, 100%, 93.41%, 90.17%, 90.17%, 90.17%; Total 563.92%; Weighted 97%; Cameras 4
Arm Elevation P4: 100%, 100%, 98.35%, 94.38%, 94.38%, 94.38%; Total 581.49%; Weighted 100%; Cameras 4
Standing P5: 100%, 100%, 98.35%, 94.38%, 94.38%, 94.38%; Total 581.49%; Weighted 100%; Cameras 4
Walking P6: 100%, 100%, 98.35%, 94.38%, 94.38%, 94.38%; Total 581.49%; Weighted 100%; Cameras 4
PAGE 113
Table 3-28. Task criticality for LiDAR (each task is marked in the original against the postures P1-P6 it involves; the posture marks are not reproduced here)
Task 1: 100%; Task 2: 63%; Task 3: 66%; Task 4: 88%; Task 5: 54%; Task 6: 66%; Task 7: 63%; Task 8: 100%; Task 9: 66%; Task 10: 67%; Task 11: 97%

3.5 Guidelines for Integration of Sensing Systems in the Robotics-Based Manufacturing of Offsite Construction

This section develops guidelines for construction professionals to efficiently integrate sensing systems into the robotics-based manufacturing of industrialized construction. The guidelines are based on the most and least critical tasks and postures and on the cases discussed in the previous section, and they are divided into two subsections: the first based on the camera and the second based on LiDAR. All the scenarios discussed in the previous section can occur in a manufacturing setup, so these guidelines would help professionals implement sensing systems in a robotics-based manufacturing line.
PAGE 114
3.5.1 Guidelines for Camera

3.5.1.1 Task-driven guidelines

Most critical task: Task 1 (studs, plates, and pre-assembled components nailed together on the framing table) and Task 8 (placing, positioning, and fixing sheathing in position on the insulation and sheathing station table) are the most critical tasks according to the decision matrix table. Task 1 and Task 8 can be performed in different postures, such as kneeling, arm elevation, and standing, or a combination of these postures. The percentage of the non-visible portion of a worker is high while kneeling, standing, or standing with arms elevated, which makes these tasks more critical.

Least critical task: Task 5 (fabrication of a window/door opening, or the header of a garage opening, on the floor) is the least critical task, because the postures involved in carrying it out are squatting and bending. From the decision matrix table, the criticality of squatting and bending is 29% and 93%, respectively; squatting, in particular, has the lowest criticality of all postures, which makes the task less likely to influence the sensing system.

3.5.1.2 Posture-driven guidelines

Most critical posture: Posture 4 (arm elevation), Posture 5 (standing), and Posture 6 (walking) are the most critical postures according to the decision matrix table. While a worker is standing, walking, or elevating the arms to carry material, the percentage of non-visibility of the worker behind is very high. This heavy occlusion can prevent the camera from recognizing and localizing the human in the robotics-based manufacturing process.

Least critical posture: Posture 2 (squatting) is the least critical posture. When a worker is squatting, more than half the height of the worker is
PAGE 115
reduced. Because of this, the worker at the back remains visible enough to be recognized by the camera; from the decision-matrix framework for the camera, the non-visibility of the worker at the back is 29%.

3.5.1.3 Factor-driven guidelines

The color of wood and of human skin was discussed previously. One of the advantages of an RGB-D camera is that it can conveniently acquire the color, texture, and contour of an object. Figure 3-4 and Table 3-7 describe the different skin colors and wood colors, respectively. Skin tone 1 or 2 on the NIS scale (Figure 3-4) and chestnut (whitish to light brown) are somewhat similar. So, when a human with NIS-scale skin tone 1 or 2 is partially occluded with only an arm visible, the proposed system can misinterpret the human arm as a wooden stud.

3.5.1.4 Scenario-driven guidelines

A manufacturing scenario in which both workers, the robots, and the material are included in one RGB-D camera frame would strongly affect the capabilities of the camera. Scenario 3 in the Manufacturing section for the RGB-D camera describes a scene where one worker is walking towards the framing table, another worker is bending down to nail a wooden stud, and the robot is swinging towards the material table to pick up a material. The RGB-D camera can be affected because, first, all the components are in the same frame and might occlude each other, and, second, the worker handling the wood might have a skin tone similar to the color of the wood, which would affect the color and texture recognition capability of the RGB-D camera.
PAGE 116
3.5.2 Guidelines for LiDAR

3.5.2.1 Task-driven guidelines

Most critical task: Task 1 (studs, plates, and pre-assembled components nailed together on the framing table) and Task 8 (placing, positioning, and fixing sheathing in position on the insulation and sheathing station table) are the most critical tasks according to the matrix created above for LiDAR. Task 1 and Task 8 can be carried out by bending, standing, and walking, postures that have a high criticality compared with the others, so the risk of non-visibility of a worker increases. In addition, when different materials and the robot are considered alongside the task performed, the risk increases further.

Least critical task: Task 5 is the least critical task according to the decision-matrix framework for LiDAR shown above. Task 5 can be performed by squatting and bending, which have comparatively low weighted criticality compared with the other postures.

3.5.2.2 Posture-driven guidelines

Most critical posture: Posture 4 (arm elevation), Posture 5 (standing), and Posture 6 (walking) are the most critical postures. When workers are working in a pair and one worker is right behind a standing or walking worker, the non-visibility of the rear worker is very high, and it increases further as the distance between the two workers decreases.

Least critical posture: Posture 2 (squatting) is the least critical posture. When a worker is squatting, the worker working right behind has high visibility due to the reduced height of the worker in front. This allows enough LiDAR
PAGE 117
points to reflect from the worker at the back and lets the LiDAR detect and localize that worker accurately.

3.5.2.3 Scenario-driven guidelines

In the section above on the selection of LiDAR as one of the sensing systems, an experiment conducted at Wuhan University using a Velodyne 64E was discussed. Consider a similar scenario in our case, where Task 1, one of the most critical tasks, is carried out by one worker while the other worker is walking behind. Although the worker behind is not fully occluded, if there are too few laser points reflecting from them, the system will be unable to extract the additional worker or object from the point cloud in those frames.

3.5.3 Occlusion-Based Framework for Camera/LiDAR

Section 3.3.2 broadly describes the different tasks and postures that need to be carried out in industrialized construction. These tasks and postures influence the working of the sensor system and the risk of humans, objects, or machines not being recognized. Tables 3-25 and 3-27 give the percentage of the non-visible portion of workers while they perform different tasks in different postures. This helps us decide the number of cameras and LiDARs that should be used according to the number of people working, the tasks performed by the different workers, and the percentage of a worker's body covered due to the presence of other workers. When a worker is 100% covered by another worker in front, two cameras or two LiDARs are required to detect both workers, as shown in Figure 3-13(B). So, for Tables 3-25 and 3-27, postures with 100% coverage should have two cameras/LiDARs to reliably recognize the human. When different tasks are performed simultaneously in robotics-based manufacturing, more than one posture can have 100% coverage; these cases require more than two cameras to accurately recognize the
PAGE 118
human posture. In those cases, we can use four cameras/LiDARs, one on each side of the robotic station, as shown in Figure 3-13(C). If coverage is less than 100%, one camera/LiDAR is sufficient for the detection and localization of workers in the robotic workstation, as shown in Figure 3-13(A). The method described above for deciding the number of cameras and LiDARs is limited by the assumption that the workstation falls within the view range of the camera or the point-cloud range of the LiDAR; if the workstation extends outside the view, the number of cameras or LiDARs must be doubled or tripled.

Figure 3-13. Number of cameras/LiDARs

3.5.4 Case Study

Small-scale manufacturing plant: Consider a small-scale manufacturing plant in which two workers work in a robotic station with one robot. According to Table 3-17, the workers in this plant perform Task 2 (measuring and cutting cement plasterboard on the floor) and Task 3 (loading cement plasterboard onto the frame from the material station). The postures required to carry out Task 2 are Kneeling (P1)
PAGE 119
and Walking (P6), whereas the postures required for Task 3 are Bending (P3) and Walking (P6). While these tasks are performed, interactions of working postures such as P1-P3, P1-P6, P6-P3, P6-P6, P3-P1, P3-P6, and P6-P1 can be seen. According to the decision-matrix frameworks prepared for the camera and the LiDAR, no posture would have 100% non-visibility due to the other worker. Hence, one camera/LiDAR would be enough to accurately detect and localize workers in this robotics-based manufacturing setting, and the camera/LiDAR setup of Figure 3-13(A) can be utilized for a small-scale manufacturing plant with two workers performing a limited number of tasks. In addition to the criticality of worker postures, the presence of material and of the robot can also play a role in deciding the number of cameras/LiDARs. Tasks 2 and 3 require the use of cement plasterboard, which is available on the market in different sizes, such as ¼ in. x 3 ft. x 5 ft. or ½ in. x 4 ft. x 8 ft.; depending on the size of the material, the percentage of the non-visible worker and the criticality can be determined. Similarly, the robotic arms used in robotics-based manufacturing are available with different horizontal and vertical reaches and bases, and a change in the dimensions of the robot also influences the percentage of non-visibility of a worker. Knowing the exact dimensions of the robot and the material used can help further determine the number of cameras/LiDARs needed in the robotics-based manufacturing of industrialized construction.
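For the camera, this conclusion can be checked against Table 3-25 using the rule sketched in Section 3.4.2 (cameras_required is the hypothetical helper from that sketch, not a routine defined in the thesis):

    # Pairwise rates of invisibility (%) from Table 3-25 restricted to P1, P3 and P6
    small_scale_pairs = [68.02, 61.06, 58.58,   # P1 in front of P1, P3, P6
                         86.15, 74.94, 74.38,   # P3 in front of P1, P3, P6
                         93.89, 84.27, 80.87]   # P6 in front of P1, P3, P6
    cameras = cameras_required(r / 100 for r in small_scale_pairs)   # -> 1, no pairing reaches 100%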
PAGE 120
Figure 3-14. Small-scale manufacturing setup

Table 3-29. Small-scale manufacturing tasks (Task / Description / Worker / MC / RC)
T2 - Measuring and cutting cement plasterboard (CP) on floor - W1 - M1 - R1
T3 - Load cement plasterboard on frame from material station - W2 - M2 - R2

Large-scale manufacturing plant: Consider a large-scale manufacturing plant capable of performing all the tasks required for wall framing. Eleven people work in the presence of numerous robotic stations, with each worker in charge of one of the tasks listed in Table 3-17. The presence of eleven people in this configuration results in a methodical flow of labor, with operations completed one after the other in the required sequence to successfully complete wall framing. To complete all the duties listed in Table 3-17, personnel will need to perform all the postures (i.e., P1-P6). As a result, all the posture combinations possible with these six postures could occur, creating a scenario in which more than one posture would have 100%
PAGE 121
non-visibility of a worker due to another. Hence, accurate recognition and localization of workers in a large-scale manufacturing setup as described above would require four cameras/LiDARs (a worked check of this rule appears after Table 3-30). In addition, performing wall framing requires different materials, such as wood studs, cement plasterboard, sheathing, doors, and windows, all of which are available in different sizes; knowing the exact dimensions of these materials helps further determine the criticality of tasks and postures. Also, as there are multiple robotic stations in a large-scale manufacturing plant, knowing the exact horizontal and vertical reach, the base, and the circumference of the arm can help determine the non-visibility of workers in different scenarios.

Table 3-30. Large-scale manufacturing tasks (Task / Description / Worker / MC / RC)
T1 - Studs, plates, and pre-assembled components nailed together on framing table - W1 - M1 - R1
T2 - Measuring and cutting cement plasterboard (CP) on floor - W2 - M2 - R2
T3 - Load cement plasterboard on frame from material station - W3 - M3 - R3
T4 - Screw board to frame on a framing wall - W4 - M4 - R4
PAGE 122
Table 3-30. Continued
T5 - Fabricating of window/door opening, or header of garage openings, on the floor - W5 - M5 - R5
T6 - Fix window and door pods on framing table - W6 - M6 - R6
T7 - Pre-cutting of sheathing as per required dimensions on floor - W7 - M7 - R7
T8 - Placing, positioning, and fixing sheathing in their position on insulation and sheathing station table - W8 - M8 - R8
T9 - Stapling sheathing and checking openings and wall edges - W9 - M9 - R9
T10 - Install windows and doors on wall - W10 - M10 - R10
T11 - Install cladding system on walls - W11 - M11 - R11
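As a check of the four-camera conclusion for the large-scale plant, the full pairwise matrix of Table 3-25 can be passed to the same hypothetical cameras_required helper; four of the thirty-six pairings reach 100%, so the rule returns four cameras.

    # All 36 posture pairings from Table 3-25 (rates of invisibility, %)
    table_3_25 = [
        [68.02, 90.51, 61.06, 58.58, 58.58, 58.58],   # P1 in front
        [27.90, 37.12, 25.04, 21.03, 21.03, 21.03],   # P2 in front
        [86.15, 100.0, 74.94, 74.38, 74.38, 74.38],   # P3 in front
        [93.89, 100.0, 84.27, 80.87, 80.87, 80.87],   # P4 in front
        [93.89, 100.0, 84.27, 80.87, 80.87, 80.87],   # P5 in front
        [93.89, 100.0, 84.27, 80.87, 80.87, 80.87],   # P6 in front
    ]
    cameras = cameras_required(v / 100 for row in table_3_25 for v in row)   # -> 4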
PAGE 123
Figure 3-15. Large-scale manufacturing setup
PAGE 124
CHAPTER 4
CONCLUSION

This thesis evaluated real-life scenarios for integrating sensing systems into the robotics-based manufacturing of industrialized construction. First, an overview of different sensing systems was performed: seven sensing technologies were identified by reviewing past studies, and selections were made based on accuracy, resolution, advantages and drawbacks, and the specifications of sensors that satisfy our demand. According to the study, the RealSense LiDAR Camera L515 was determined to be an efficient device. Further, implementing the selected sensing system in IC would influence its functioning, given the presence of essential components such as the robotic arm, humans, and materials. So, a study of the characteristic factors influencing the selected sensor, such as human body characteristics, robot color and speed, and material texture and color, was performed to successfully integrate the sensor into the robotics-based manufacturing of IC. Later, different tasks and postures were identified for IC, and a pair-wise analysis of the postures and angles of workers and of the tasks performed for wall framing in IC was carried out in order to identify real-life scenarios witnessed in the manufacturing process of wall framing. In the last stage, based on the human postures and tasks identified for IC in the previous step, real-life scenarios based on the activities performed by all the components were recognized that would influence the proposed sensing system. Scenarios for the RGB-D camera and the LiDAR were determined based on only humans, only materials, only robots, and manufacturing (including humans, materials, and robots).
PAGE 125
Lastly, a decision-matrix framework was created based on the risk associated with tasks and the non-visibility of postures. With this decision matrix, guidelines were built in light of the findings of this thesis. The guidelines are based on the most and least critical activities, postures, scenarios, and real-life situations encountered in wall framing in industrialized construction, and they are developed to help practitioners understand how to integrate these technologies into the manufacturing line.

4.1 Significance of the Research

Both academia and industry can take advantage of this thesis. Three primary benefits apply especially to academia. First, the comprehensive literature review in Chapter 2 summarized several academic studies and provided a brief background for the integration of sensors in the robotics-based manufacturing of IC. Second, the influence of human postures and tasks on the sensing system was studied; while many studies about the performance of sensors and the factors influencing them exist, this thesis limits the methodology to industrialized construction. Finally, unique scenarios influencing the proposed sensing technologies were identified. Though many academic studies investigate the benefits and challenges of implementing sensor systems in robotics, this thesis explicitly identifies the strengths, challenges, opportunities, and weaknesses of integrating sensors in the robotics-based manufacturing of industrialized construction.

4.2 Suggestions for Future Studies

In the pair-wise analysis of different tasks and postures, we assumed that only two workers would work in the industrialized construction setup, and only wooden studs and cement plasterboard were considered for wall framing. However, industrialized construction involves more than two people working, and it requires
PAGE 126
much more than just wooden studs and boards to complete wall framing. So, this research can be further extended by adding more workers and materials.
PAGE 127
APPENDIX
CALCULATIONS FOR THE DECISION MATRIX

For each pairing (front posture - rear posture), the resulting rate of invisibility of the rear worker is listed.

For Camera
1. P1-P1 (Kneeling - Kneeling): 68.02%
2. P1-P2 (Kneeling - Squatting): 90.51%
3. P1-P3 (Kneeling - Bending): 61.06%
4. P1-P4 (Kneeling - Arm Elevation): 58.59%
5. P1-P5 (Kneeling - Standing): same as 4 (same height)
6. P1-P6 (Kneeling - Walking): same as 4 (same height)
7. P2-P1 (Squatting - Kneeling): 27.90%
8. P2-P2 (Squatting - Squatting): 37.12%
9. P2-P3 (Squatting - Bending): 25.04%
10. P2-P4 (Squatting - Arm Elevation):
PAGE 128
24.30%
11. P3-P1 (Bending - Kneeling): 86.15%
12. P3-P2 (Bending - Squatting): 100% (visible height is negative)
13. P3-P3 (Bending - Bending): 74.94%
14. P3-P4 (Bending - Arm Elevation): 74.38%
15. P3-P5 (Bending - Standing): same as 14 (same height)
16. P3-P6 (Bending - Walking): same as 14 (same height)
17. P4-P1 (Arm Elevation - Kneeling): 93.89%
18. P4-P2 (Arm Elevation - Squatting): 100% (visible height is negative)
19. P4-P3 (Arm Elevation - Bending): 84.27%
20. P4-P4 (Arm Elevation - Arm Elevation): 80.87%
21. P4-P5 (Arm Elevation - Standing):
PAGE 129
same as 20 (same height)
22. P4-P6 (Arm Elevation - Walking): same as 20 (same height)
23. All the other pairings with Standing (P5) or Walking (P6) in front are similar to the corresponding Arm Elevation (P4) cases, because the heights considered are the same.

For LiDAR
1. P1-P1 (Kneeling - Kneeling): 90.83%
2. P1-P2 (Kneeling - Squatting): 100% (visible height is negative)
3. P1-P3 (Kneeling - Bending): 81.53%
4. P1-P4 (Kneeling - Arm Elevation): 77.89%
5. P1-P5 (Kneeling - Standing): same as 4 (same height)
6. P1-P6 (Kneeling - Walking): same as 4 (same height)
7. P2-P1 (Squatting - Kneeling): 56.82%
8. P2-P2 (Squatting - Squatting): 87.80%
9. P2-P3 (Squatting - Bending): 59.23%
PAGE 130
10. P2-P4 (Squatting - Arm Elevation): 56.84%
11. P2-P5 (Squatting - Standing): same as 10 (same height)
12. P2-P6 (Squatting - Walking): same as 10 (same height)
13. P3-P1 (Bending - Kneeling): 100% (visible height is negative)
14. P3-P2 (Bending - Squatting): 100% (visible height is negative)
15. P3-P3 (Bending - Bending): 93.41%
16. P3-P4 (Bending - Arm Elevation): 90.17%
17. P3-P5 (Bending - Standing): same as 16 (same height)
18. P3-P6 (Bending - Walking): same as 16 (same height)
19. P4-P1 (Arm Elevation - Kneeling): 100% (visible height is negative)
20. P4-P2 (Arm Elevation - Squatting): 100% (visible height is negative)
21. P4-P3 (Arm Elevation - Bending):
PAGE 131
98.35%
22. P4-P4 (Arm Elevation - Arm Elevation): 94.38%
23. P4-P5 (Arm Elevation - Standing): same as 22 (same height)
24. P4-P6 (Arm Elevation - Walking): same as 22 (same height)
All the other pairings with Standing (P5) or Walking (P6) in front are similar to the corresponding Arm Elevation (P4) cases, because the heights considered are the same.
PAGE 132
132 LIST OF REFERENCES Abanda, F. H., Tah, J. H. M., & Ch eung, F. K. T. (2017a). BIM in off site manufacturing for buildings. Journal of Building Engineering, 14, 89 102. https://doi.org/10.1016/J.JOBE.2017.10.002 ABB Robotics. (2022). Industrial Robots Portfolio | ABB Robotics. https://new.abb.com/products/robo tics/industrial robots Abioye, S. O., Oyedele, L. O., Akanbi, L., Ajayi, A., Davila Delgado, J. M., Bilal, M., Akinade, O. O., & Ahmed, A. (2021). Artificial intelligence in the construction industry: A review of present status, opportunities and future ch allenges. Journal of Building Engineering, 44, 103299. https://doi.org/10.1016/J.JOBE.2021.103299 Adepoju, O., Aigbavboa, C., Nwulu, N., & Onyia, M. (2022). Re skilling Human Resources for Construction 4.0. https://doi.org/10.1007/978 3 030 85973 2 Ahn, S. , Han, S., Asce, A. M., Al Hussein, M., & Asce, M. (2019a). 2D Drawing Visualization Framework for Applying Projection Based Augmented Reality in a Panelized Construction Manufacturing Facility: Proof of Concept. Journal of Computing in Civil Engineering, 33(5), 04019032. https://doi.org/10.1061/(ASCE)CP.1943 5487.0000843 Ahn, S., Han, S., Asce, A. M., Al Hussein, M., & Asce, M. (2019b). 2D Drawing Visualization Framework for Applying Projection Based Augmented Reality in a Panelized Construction Manufacturing Facility: Proof of Concept. Journal of Computing in Civil Engineering, 33(5), 04019032. https://doi.org/10.1061/(ASCE)CP.1943 5487.0000843 Ajweh, Z. (2014). A Framework for Design of Panelized Wood Framing Prefabrication Utilizin g Multi panels and Crew Balancing. https://doi.org/10.7939/R3C39Q Akhloufi, M. A., ben Larbi, W., & Maldague, X. (2007). Framework for color texture classification in machine vision inspection of industrial products. Conference Proceedings IEEE Internati onal Conference on Systems, Man and Cybernetics, 1067 1071. https://doi.org/10.1109/ICSMC.2007.4413687 Akinci, B., Boukamp, F., Gordon, C., Huber, D., Lyons, C., & Park, K. (2006). A formalism for utilization of sensor systems and integrated project models for active construction quality control. Automation in Construction, 15(2), 124 138. https://doi.org/10.1016/J.AUTCON.2005.01.008 Alarifi, A., Al Salman, A., Alsaleh, M., Alnafessah, A., Al Hadhrami, S., Al Ammar, M. A., & Al Khalifa, H. S. (2016). Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances. Sensors 2016, Vol. 16, Page 707, 16(5), 707. https://doi.org/10.3390/S16050707
PAGE 133
133 Alwisy, A., Bu Hamdan, S., Barkokebas, B., Bouferguene, A., & Al Hussein, M. (2018). A BIM based automati on of design and drafting for manufacturing of wood panels for modular residential buildings. International Journal of Construction Management, 19(3), 187 205. https://doi.org/10.1080/15623599.2017.1411458 Anandan, T. (2015). Association for Advanced Autom ation. https://www.automate.org/industry insights/calculating your roi for robotic automation cost vs cash flow Andriyanov, N. (2022). Estimating Object Coordinates Using Convolutional Neural Networks and Intel Real Sense D415/D455 Depth Maps. 1 4. https:/ /doi.org/10.1109/ITNT55410.2022.9848700 Andriyanov, N., Khasanshin, I., Utkin, D., Gataullin, T., Ignar, S., Shumaev, V., & Soloviev, V. (2022). Intelligent System for Estimation of the Spatial Position of Apples Based on YOLOv3 and Real Sense Depth Camera D415. Symmetry 2022, Vol. 14, Page 148, 14(1), 148. https://doi.org/10.3390/SYM14010148 Antwi Afari, M. F., Li, H., Yu, Y., & Kong, L. (2018). Wearable insole pressure system for automated detection and classification of awkward working postures in constr uction workers. Automation in Construction, 96, 433 441. https://doi.org/10.1016/J.AUTCON.2018.10.004 Apogeeweb. (2021, August 18). What is an Ultrasonic Sensor? https://www.apogeeweb.net/electron/what is an ultrasonic sensor.html Arents, J., Abolins, V., Judvaitis, J., Vismanis, O., Oraby, A., & Ozols, K. (2021). Human Robot Collaboration Trends and Safety Aspects: A Systematic Review. Journal of Sensor and Actuator Networks 2021, Vol. 10, Page 48, 10(3), 48. https://doi.org/10.3390/JSAN10030048 Aryal, M. (2018). Object Detection, Classification, and Tracking for Autonomous Vehicle. ${sadil.baseUrl}/handle/123456789/602 Aryal, M., & Baine, N. (2019). Detection, Classification, and Tracking of Objects for Autonomous Vehicles. ION 2019 International Technical Meeting Proceedings, 870 883. https://doi.org/10.33012/2019.16731 Autodesk. (2019). Industrialized Construction in Academia . Ayinla, K., Cheung, F., & Skitmore, M. (2021). Process Waste Analysis for Offsite Production Methods for House Construction: A Case Study of Factory Wall Panel Production. Journal of Construction Engineering and Management, 148(1), 05021011. https://doi.org/10.1061/(ASCE)CO.1943 7862.0002219 Azim, A., & Aycard, O. (2012). Detection, classification and tracking of m oving objects in a 3D environment. IEEE Intelligent Vehicles Symposium, Proceedings, 802 807. https://doi.org/10.1109/IVS.2012.6232303
PAGE 134
134 Bai, L., Zhao, Y., & Huang, X. (2022). Enabling 3D Object Detection with a Low Resolution LiDAR. IEEE Embedded Systems Le tters. https://doi.org/10.1109/LES.2022.3170298 Batistic, L., & Tomic, M. (2018). Overview of indoor positioning system technologies. 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO 20 18 Proceedings, 473 478. https://doi.org/10.23919/MIPRO.2018.8400090 Bello, S. A., Yu, S., Wang, C., Adam, J. M., & Li, J. (2020). Review: Deep learning on 3D point clouds. In Remote Sensing (Vol. 12, Issue 11). MDPI AG. https://doi.org/10.3390/rs1211172 9 Beltrán, J., Guindel, C., Moreno, F. M., Cruzado, D., GarcÃa, F., & de La Escalera, A. (2018). BirdNet: A 3D Object Detection Framework from LiDAR Information. IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2018 November, 3517 3523. https://doi.org/10.1109/ITSC.2018.8569311 Bertram, N., Fuchs, S., Mischke, J., Palter, R., Strube, G., & Woetzel, J. (2019). Modular construction: From projects to products. Bhateja, A., Shrivastav, A., Chaudhary, H., Lall, B., & Kalra, P. K. (2021). Depth analysis of kinect v2 sensor in different mediums. Multimedia Tools and Applications, 1 26. https://doi.org/10.1007/S11042 021 11392 Z/FIGURES/14 Bock, T. (2007). Construction robotics. Autonomous Robots, 22(3), 201 209. https://doi.org/10.1007/S105 14 006 9008 5/FIGURES/20 Bock, T. (2015a). Robot Oriented Design Thomas Bock, Thomas Linner Google Books. https://books.google.com/books?hl=en&lr=&id=JTsPCAAAQBAJ&oi=fnd&pg=PR 9&ots=CCNITNYlKA&sig=rC6UFHJ9 WaiZZqwlsAny8fZRvY#v=onepage&q&f=false Bock, T. (2015b). The future of construction automation: Technological disruption and the upcoming ubiquity of robotic s. Automation in Construction, 59, 113 121. https://doi.org/10.1016/J.AUTCON.2015.07.022 Body composition | Nutritional assessment. (n.d.). Retrieved October 19, 2022, from https://nutritionalassessment.mumc.nl/en/body composition Borcs, A., Nagy, B., & Be nedek, C. (2013). On board 3D object perception in dynamic urban scenes. 4th IEEE International Conference on Cognitive Infocommunications, CogInfoCom 2013 Proceedings, 515 520. https://doi.org/10.1109/COGINFOCOM.2013.6719301
PAGE 135
135 Breitbarth, A. M. M., Hake, C., & Notni, G. (2021a). Measurement accuracy and practical assessment of the lidar camera Intel RealSense L515. Https://Doi.Org/10.1117/12.2592570, 11782, 218 229. https://doi.org/10.1117/12.2592570 Breitbarth, A. M. M., Hake, C., & Notni, G. (2021b). Me asurement accuracy and practical assessment of the lidar camera Intel RealSense L515. Https://Doi.Org/10.1117/12.2592570, 11782, 218 229. https://doi.org/10.1117/12.2592570 Breitbarth, A. M. M., Hake, C., & Notni, G. (2021c). Measurement accuracy and pract ical assessment of the lidar camera Intel RealSense L515. Https://Doi.Org/10.1117/12.2592570, 11782, 218 229. https://doi.org/10.1117/12.2592570 Brilakis, I. (2012). Construction worker detection in video frames for initializing vision trackers. Automation in Construction, 28, 15 25. https://doi.org/10.1016/J.AUTCON.2012.06.001 Brosque, C., Galbally, E., Khatib, O., & Fischer, M. (2020). Human Robot Collaboration in Construction: Opportunities and Challenges. HORA 2020 2nd International Congress on Human Computer Interaction, Optimization and Robotic Applications, Proceedings. https://doi.org/10.1109/HORA49412.2020.9152888 Bu, F., Le, T., Du, X., Vasudevan, R., & Johnson Roberson, M. (2020). Pedestrian Planar LiDAR Pose (PPLP) Network for Oriented Pedestri an Detection Based on Planar LiDAR and Monocular Images. IEEE Robotics and Automation Letters, 5(2), 1626 1633. https://doi.org/10.1109/LRA.2019.2962358 C. Balaguer. (2004). Soft robotics concept in construction industry. https://ieeexplore.ieee.org/abstra ct/document/1438602 Carlevaris Bianco, N., Ushani, A. K., & Eustice, R. M. (2016). University of Michigan North Campus long term vision and lidar dataset. International Journal of Robotics Research, 35(9), 1023 1035. https://doi.org/10.1177/0278364915614638/ASSET/IMAGES/LARGE/10.1177_0 278364915614638 FIG2.JPEG CCOHS. (2019). Canadian Centre for Occupational Health and Safety. https://www.ccohs.ca/ Chachich, A., Bellone, J., & Smith, S. (2015). Vehicle Clearance Literat ure Review. Chan, T. O., Lichti, D. D., & Belton, D. (2013). Temporal Analysis and Automatic Calibration of the Velodyne HDL 32E LiDAR System. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, II 5 W2(5W2), 61 66. https:/ /doi.org/10.5194/ISPRSANNALS II 5 W2 61 2013
PAGE 136
136 through supervised motion tensor decomposition. Automation in Construction, 77, 67 81. https://doi.org/10.1016/J.AUTCON.2017 .01.020 Cheng, T., Teizer, J., Migliaccio, G. C., & Gatti, U. C. (2013). Automated task level thoracic posture data. Automation in Construction, 29, 24 39. https://doi.org/10.1016/ J.AUTCON.2012.08.003 Cheok, G. S., & Stone, W. C. (1999). Non Intrusive Scanning Technology for Construction Assessment. https://www.nist.gov/publications/non intrusive scanning technology construction assessment Cho, K., Baeg, S. H., & Park, S. (2012). Mu ltiple object detection and classification on uneven terrain using multi channel lidar for UGV. Https://Doi.Org/10.1117/12.919596, 8387, 318 326. https://doi.org/10.1117/12.919596 Cho, N. G., Yuille, A. L., & Lee, S. W. (2013b). Adaptive occlusion state es timation for human pose tracking under self occlusions. Pattern Recognition, 46(3), 649 661. https://doi.org/10.1016/J.PATCOG.2012.09.006 Chong, T. J., Tang, X. J., Leng, C. H., Yogeswaran, M., Ng, O. E., & Chong, Y. Z. (2015). Sensor Technologies and Simu ltaneous Localization and Mapping (SLAM). Procedia Computer Science, 76, 174 179. https://doi.org/10.1016/j.procs.2015.12.336 Chung, M. K., Lee, I., & Kee, D. (2003). Effect of stool height and holding time on postural load of squatting postures. Internati onal Journal of Industrial Ergonomics, 32(5), 309 317. https://doi.org/10.1016/S0169 8141(03)00050 7 Condotta, I. C. F. S., Brown Brandl, T. M., Pitla, S. K., Stinn, J. P., & Silva Miranda, K. O. (2020). Evaluation of low cost depth cameras for agricultura l applications. Computers and Electronics in Agriculture, 173, 105394. https://doi.org/10.1016/J.COMPAG.2020.105394 Costin, A., Wehle, A., & Adibfar, A. (2019). Leading Indicators A Conceptual IoT Based Framework to Produce Active Leading Indicators for Construction Safety. Safety 2019, Vol. 5, Page 86, 5(4), 86. https://doi.org/10.3390/SAFETY5040086 Damani, A., Shah, H., & Shah, K. (2015). Global Positioning System for Object Tracking. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10. 1.1.695.4736&rep=rep1&t ype=pdf Davidoff, J. B., & Ostergaard, A. L. (2007). The role of colour in categorial judgements. Http://Dx.Doi.Org/10.1080/02724988843000069, 40(3), 533 544. https://doi.org/10.1080/02724988843000069
PAGE 137
137 Davila Delgado, J. M., Oyedele, L., Ajayi, A., Akanbi, L., Akinade, O., Bilal, M., & Owolabi, H. (2019). Robotics and automated systems in construction: Understanding industry specific challenges for adoption. Journal of Building Engineering, 26, 100868. https://doi.org/10.1016/J.JOBE.20 19.100868 Deans, M., & Hebert, M. (2001). Experimental comparison of techniques for localization and mapping using a bearing only sensor. Experimental Robotics VII, 395 404. https://doi.org/10.1007/3 540 45118 8_40 Dissanayake, G., Huang, S., Wang, Z., & R anasinghe, R. (2011). A review of recent developments in Simultaneous Localization and Mapping. 2011 6th International Conference on Industrial and Information Systems, ICIIS 2011 Conference Proceedings, 477 482. https://doi.org/10.1109/ICIINFS.2011.6038 117 Dubendorf, V. A. (2003). RFID. Wireless Data Technologies, 161 181. https://doi.org/10.1002/0470861355.CH9 Elena Maria BARALIS Ing Andrea, S. (n.d.). NN based approach for Object Detection and 6DoF Pose Estimation with ToF Cameras in Space. Eric, N., & Jang, J. W. (2017). Kinect depth sensor for computer vision applications in autonomous vehicles. International Conference on Ubiquitous and Future Networks, ICUFN, 531 535. https://doi.org/10.1109/ICUFN.2017.7993842 Fanuc. (2022). Industrial robots for sm arter automation. https://www.fanuc.eu/pl/en/robots Fischer, M. J., Charles, C. Z., Lundy, G., & Massey, D. S. (2003). The Source of the Universities. Fryar, C. D., Kruszon Moran, D. , Gu, Q., Carroll, M., & Ogden, C. L. (2018). National Health Statistics Reports, Number 160, August 4, 2021. National Health Statistics Reports Number, 160. https://www.cdc.gov/nchs/products/index.htm. Fryar, C. D., Kruszon Moran, D., Gu, Q., & Ogden, C. L. (2018). National Health Statistics Reports, Number 122. Fu, L., Gao, F., Wu, J., Li, R., Karkee, M., & Zhang, Q. (2020a). Application of consumer RGB D cameras for fruit detection and localization in field: A critical review. Computers and Ele ctronics in Agriculture, 177, 105687. https://doi.org/10.1016/J.COMPAG.2020.105687 Fuersattel, P., Plank, C., Maier, A., & Riess, C. (2017). Accurate laser scanner to camera calibration with application to range sensor evaluation. IPSJ Transactions on Comp uter Vision and Applications 2017 9:1, 9(1), 1 12. https://doi.org/10.1186/S41074 017 0032 5
PAGE 138
138 Fukuda, Y., Feng, M., Narita, Y., Kaneko, S., & Tanaka, T. (2010). Vision based displacement sensor for monitoring dynamic response using robust object search algo rithm. Proceedings of IEEE Sensors, 1928 1931. https://doi.org/10.1109/ICSENS.2010.5689997 Funck, J. W., Zhong, Y., Butler, D. A., Brunner, C. C., & Forrer, J. B. (2003). Image segmentation algorithms applied to wood defect detection. Computers and Electro nics in Agriculture, 41(1 3), 157 179. https://doi.org/10.1016/S0168 1699(03)00049 8 Gao, H., Cheng, B., Wang, J., Li, K., Zhao, J., & Li, D. (2018). Object Classification Using CNN Based Fusion of Vision and LIDAR in Autonomous Vehicle Environment. IEEE T ransactions on Industrial Informatics, 14(9), 4224 4230. https://doi.org/10.1109/TII.2018.2822828 GarcÃa de Soto, B., Agustà Juan, I., Hunhevicz, J., Joss, S., Graser, K., Habert, G., & Adey, B. T. (2018). Productivity of digital fabrication in constructio n: Cost and time analysis of a robotically built wall. Automation in Construction, 92, 297 311. https://doi.org/10.1016/J.AUTCON.2018.04.004 Gibson, E. (1979). Principles of perceptual learning and development. http://garfield.library.upenn.edu/classics197 9/A1979HM62700001.pdf Giretti, A., Carbonari, A., Vaccarini, M., Giretti, A., Carbonari, A., & Vaccarini, M. (2012). Ultra Wide Band Positioning Systems for Advanced Construction Site Management. New Approach of Indoor and Outdoor Localization Systems. htt ps://doi.org/10.5772/48260 Global Positioning System | Uses, Advantages & Disadvantages of Global Positioning System. (n.d.). Retrieved September 23, 2022, from https://dreamcivil.com/global positioning system/ GPS.gov: GPS Overview. (n.d.). Retrieved Sept ember 19, 2022, from https://www.gps.gov/systems/gps/ Grenzdorffer, T., Gunther, M., & Hertzberg, J. (2020). YCB M: A Multi Camera RGB D Dataset for Object Recognition and 6DoF Pose Estimation. Proceedings IEEE International Conference on Robotics and Au tomation, 3650 3656. https://doi.org/10.1109/ICRA40945.2020.9197426 Grunnet Jepsen, A., Sweetser, J. N., Winer, P., Takagi, A., & Woodfill, J. (n.d.). Projectors for Intel ® RealSense TM Depth Cameras D4xx. Gu, Y., Lo, A., & Niemegeers, I. (2009b). A survey of indoor positioning systems for wireless personal networks. IEEE Communications Surveys and Tutorials, 11(1), 13 32. https://doi.org/10.1109/SURV.2009.090103
PAGE 139
139 Driving Car Works IEEE Spectrum. https ://spectrum.ieee.org/how google self driving car works Gurau, L., Timar, M. C., Porojan, M., & Ioras, F. (2013). Image Processing Method as a Supporting Tool for Wood Species Identification. In Wood and Fiber Science (pp. 303 313). https://wfs.swst.org/ind ex.php/wfs/article/view/1966 Haas, C. (2006). Tracking the Location of Materials on Construction Job Sites Geometric Control Methods for Industrialized Construction View project Real time process control for industrial pipe modules View project. Article in Journal of Construction Engineering and Management. https://doi.org/10.1061/(ASCE)0733 9364(2006)132:9(911) Halmetschlager Funek, G., Suchi, M., Kampel, M., & Vincze, M. (2019). An empirical evaluation of ten depth cameras: Bias, precision, lateral noise, different lighting conditions and materials, and multiple sensor setups in indoor environments. IEEE Robotics and Automation Magazine, 26(1), 67 77. https://doi.org/10.1109/MRA.2018.2852795 Halterman, R., & Bruch, M. (2010). Velodyne HDL 64E lidar for unm anned surface vehicle obstacle detection. Https://Doi.Org/10.1117/12.850611, 7692, 123 130. https://doi.org/10.1117/12.850611 Han, S., & Lee, S. (2013). A vision based motion capture and recognition framework for behavior based safety management. Automatio n in Construction, 35, 131 141. https://doi.org/10.1016/J.AUTCON.2013.05.001 Hannon, L., & Defina, R. (2016). Reliability Concerns in Measuring Respondent Skin Tone by Interviewer Observation. Public Opinion Quarterly, 80(2), 534. https://doi.org/10.1093/P OQ/NFW015 Hasan, A., & Jha, K. N. (2013). Safety incentive and penalty provisions in Indian construction projects and their impact on safety performance. Http://Dx.Doi.Org/10.1080/17457300.2011.648676, 20(1), 3 12. https://doi.org/10.1080/17457300.2011.648 676 Heikkilä, T., Ahola, J. M., Viljamaa, E., & Järviluoma, M. (2010). An interactive 3D sensor system and its programming for target localizing in robotics applications. Proceedings of the IASTED International Conference on Robotics, Robo 2010, 89 96. htt ps://doi.org/10.2316/P.2010.703 056 Heng, L., Greg, C., Wong, J., & Skitmore, M. (2016). Real Time Locating Systems Applications in Construction. http://eprints.qut.edu.au/91956/3/91956.pdf Hofman, E., Halman, J. I. M., & Ion, R. A. (2006). Variation in Housing Design: Identifying Customer Preferences. Http://Dx.Doi.Org/10.1080/02673030600917842, 21(6), 929 943. https://doi.org/10.1080/02673030600917842
PAGE 140
140 Horcajo De La Cruz, D. (2021). Analysis of RGB D images through Computer Vision for robot grasping of industrial parts. Huang, G. (2019). Visual Inertial Navigation: A Concise Review . Ibrahim, N. (2009). Parliamentary interpreting in Malaysia: A case study. Meta, 54(2), 357 369. https://doi.org/10.7202/037686AR Information Fusion (FUSION), 2014 17th International Conference on. (n.d.). Intel. (n.d.). LiDAR Camera L515 Intel® RealSense TM Depth and Tracking Cameras. Retrieved September 29, 2022, from https://www.intelrealsense.com/lidar camera l515/ Inyang, N. (2013). A Framework for ergonomic a ssessment of residential construction tasks. ERA. Inyang, N., Han, S., Al Hussein, M., & El Rich, M. (2012). A VR Model of Ergonomics and Productivity Assessment in Panelized Construction Production Line Structural Insulated Panel System (SIPs) for Residen tial and Commercial Projects View project Reassessment of Mobile Crane Ground Support View project. https://doi.org/10.1061/9780784412329.109 Jablonski, N. (2010). Skin Coloration. Jiang, C., Wang, Z., Liang, H., & Tan, S. (2022). A Fast and High Performan ce Object Proposal Method for Vision Sensors: Application to Object Detection. IEEE Sensors Journal, 22(10), 9543 9557. https://doi.org/10.1109/JSEN.2022.3155232 Jiang, G., Yin, L., Jin, S., Tian, C., Ma, X., & Ou, Y. (2019). A Simultaneous Localization an d Mapping (SLAM) Framework for 2.5D Map Building Based on Low Cost LiDAR and Vision Fusion. Applied Sciences 2019, Vol. 9, Page 2105, 9(10), 2105. https://doi.org/10.3390/APP9102105 Jin, R., Gao, S., Cheshmehzangi, A., & Aboagye Nimo, E. (2018). A holistic review of off site construction literature published between 2008 and 2018. Journal of Cleaner Production, 202, 1202 1219. https://doi.org/10.1016/J.JCLEPRO.2018.08.195 Jin, S., Tian, C., Ma, X., & Ou, Y. (2019). A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low Cost LiDAR and Vision Fusion. Applied Sciences 2019, Vol. 9, Page 2105, 9(10), 2105. https://doi.org/10.3390/APP9102105
PAGE 141
Joo, K. J., Pyo, J. W., Ghosh, A., In, G. G., & Kuc, T. Y. (2021). A pallet recognition and rotation algorithm for autonomous logistics vehicle system of a distribution center. International Conference on Control, Automation and Systems, 2021-October, 1387-1390. https://doi.org/10.23919/ICCAS52745.2021.9649741
Katz, D., & Brock, O. (2008). Manipulating Articulated Objects With Interactive Perception. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4543220
Kaur, M., Sandhu, M., Mohan, N., & Sandhu, P. (2011). RFID Technology Principles, Advantages, Limitations & Its Applications. http://ijcee.org/papers/306-E794.pdf
Keizer, A., van Elburg, A., Helms, R., & Dijkerman, H. C. (2016). A virtual reality full body illusion improves body image disturbance in anorexia nervosa. PLoS ONE, 11(10). https://doi.org/10.1371/JOURNAL.PONE.0163921
Kelly, C., Wilkinson, B., Abd-Elrahman, A., Cordero, O., & Lassiter, H. A. (2022). Accuracy Assessment of Low-Cost Lidar Scanners: An Analysis of the Velodyne HDL-32E and Livox Mid-40's Temporal Stability. Remote Sensing, 14(17), 4220. https://doi.org/10.3390/RS14174220
Khan, M. U., Zaidi, S. A. A., Ishtiaq, A., Bukhari, S. U. R., Samer, S., & Farman, A. (2021). A Comparative Survey of LiDAR-SLAM and LiDAR-based Sensor Technologies. Proceedings of the 2021 Mohammad Ali Jinnah University International Conference on Computing, MAJICC 2021. https://doi.org/10.1109/MAJICC53071.2021.9526266
Khoury, H., Chdid, D., Oueis, R., Elhajj, I., & Asmar, D. (2015). Infrastructureless approach for ubiquitous user location tracking in construction environments. Automation in Construction, 56, 47-66. https://doi.org/10.1016/J.AUTCON.2015.04.009
Khoury, H. M., & Kamat, V. R. (n.d.). WLAN-Based User Position Tracking for Contextual Information Access in Indoor Construction Environments.
Kilbom, A. (1994). Assessment of physical exposure in relation to work-related musculoskeletal disorders: What information can be obtained from systematic observations? https://www.jstor.org/stable/40966300
Kim, D. W., Moon, S. M., Cho, H. H., & Kang, K. I. (2011). An Application of Safety Management for Tunnel Construction Using RTLS Technology. Korean Journal of Construction Engineering and Management, 12(2), 12-20. https://doi.org/10.6106/KJCEM.2011.12.2.12
PAGE 142
Kim, J. U., Kwon, J., Kim, H. G., Lee, H., & Ro, Y. M. (2018). Object Bounding Box Critic Networks for Occlusion-Robust Object Detection in Road Scene. Proceedings - International Conference on Image Processing, ICIP, 1313-1317. https://doi.org/10.1109/ICIP.2018.8451034
King, N., Bechthold, M., Kane, A., & Michalatos, P. (2014). Robotic tile placement: Tools, techniques and feasibility. Automation in Construction, 39, 161-166. https://doi.org/10.1016/J.AUTCON.2013.08.014
Kolakowski, M., Djaja-Josko, V., & Kolakowski, J. (2022). Static LiDAR Assisted UWB Anchor Nodes Localization. IEEE Sensors Journal, 22(6), 5344-5351. https://doi.org/10.1109/JSEN.2020.3046306
Kontovourkis, O., & Tryfonos, G. (2020). Robotic 3D clay printing of prefabricated non-conventional wall components based on a parametric-integrated design. Automation in Construction, 110, 103005. https://doi.org/10.1016/J.AUTCON.2019.103005
Koyuncu, H., Yang, S. H. H., & Hua Yang, S. (2010). A Survey of Indoor Positioning and Object Locating Systems. IJCSNS International Journal of Computer Science and Network Security, 10(5), 121. https://www.researchgate.net/publication/267956851
Kubitz, O., Berger, M. O., Perlick, M., & Dumoulin, R. (1997). Application of radio frequency identification devices to support navigation of autonomous mobile robots. IEEE Vehicular Technology Conference, 1, 126-130. https://doi.org/10.1109/VETEC.1997.596332
KUKA AG. (2022). Industrial robots. https://www.kuka.com/en-us/products/robotics-systems/industrial-robots
Kulkarni, M., Junare, P., Deshmukh, M., & Rege, P. P. (2021). Visual SLAM Combined with Object Detection for Autonomous Indoor Navigation Using Kinect V2 and ROS. 2021 IEEE 6th International Conference on Computing, Communication and Automation, ICCCA 2021, 478-482. https://doi.org/10.1109/ICCCA52192.2021.9666426
Kundu, A. S., Mazumder, O., Dhar, A., & Bhaumik, S. (2016). Occupancy grid map generation using 360° scanning Xtion Pro Live for indoor mobile robot navigation. 2016 IEEE 1st International Conference on Control, Measurement and Instrumentation, CMI 2016, 464-468. https://doi.org/10.1109/CMI.2016.7413791
Kwon, S. K., Hyun, E., Lee, J. H., Lee, J., & Son, S. H. (2016). A Low-Complexity Scheme for Partially Occluded Pedestrian Detection Using LIDAR-RADAR Sensor Fusion. Proceedings - 2016 IEEE 22nd International Conference on Embedded and Real-Time Computing Systems and Applications, RTCSA 2016, 104. https://doi.org/10.1109/RTCSA.2016.20
PAGE 143
Lachance, E., Lehoux, N., & Blanchet, P. (2022). Automated and robotized processes in the timber-frame prefabrication construction industry: A state of the art. 2022 IEEE 6th International Conference on Logistics Operations Management, GOL 2022. https://doi.org/10.1109/GOL53975.2022.9820541
Landaluce, H., Arjona, L., Perallos, A., Falcone, F., Angulo, I., & Muralter, F. (2020). A Review of IoT Sensing Applications and Challenges Using RFID and Wireless Sensor Networks. Sensors (Basel, Switzerland), 20(9). https://doi.org/10.3390/S20092495
Landmark Homes. (2018). ACQBUILT: How We Build. YouTube. https://www.youtube.com/watch?v=yc0IsmycvJo&t=45s&ab_channel=LandmarkHomes
Lankin, R., Kim, K., & Huang, P. C. (2020). ROS-Based Robot Simulation for Repetitive Labor-Intensive Construction Tasks. IEEE International Conference on Industrial Informatics (INDIN), 2020-July, 206-213. https://doi.org/10.1109/INDIN45582.2020.9442192
LD-MRS400001. (n.d.). Retrieved September 29, 2022, from https://www.nexinstrument.com/LD-MRS400001
Lecrosnier, L., Khemmar, R., Ragot, N., Decoux, B., Rossi, R., Kefi, N., & Ertaud, J. Y. (2020). Deep Learning-Based Object Detection, Localisation and Tracking for Smart Wheelchair Healthcare Mobility. https://doi.org/10.3390/ijerph18010091
Lee, T. H., & Han, C. S. (2013b). Analysis of working postures at a construction site using the OWAS method. International Journal of Occupational Safety and Ergonomics, 250. https://doi.org/10.1080/10803548.2013.11076983
Lehtola, V., Nüchter, A., & Goulette, F. (2022). Advances in Mobile Mapping Technologies. www.mdpi.com/journal/remotesensing
Li, K. W. (2000). Improving postures in construction work. Ergonomics in Design, 8(4), 11-16. https://doi.org/10.1177/106480460000800403
Li, L., Zhang, R., Chen, L., Liu, B., Zhang, L., Tang, Q., Ding, C., Zhang, Z., & Hewitt, A. J. (2022). Spray drift evaluation with point clouds data of 3D LiDAR as a potential alternative to the sampling method. Frontiers in Plant Science, 13. https://doi.org/10.3389/fpls.2022.939733
Liao, B., Li, J., Ju, Z., & Ouyang, G. (2018). Hand gesture recognition with generalized Hough transform and DC-CNN using RealSense. 8th International Conference on Information Science and Technology, ICIST 2018, 84-90. https://doi.org/10.1109/ICIST.2018.8426125
PAGE 144
LiDAR Sensors for Robotic Systems | Mapix Technologies. (n.d.). Retrieved September 19, 2022, from https://www.mapix.com/lidar-applications/lidar-robotics/
Lin, C. M., Tsai, C. Y., Lai, Y. C., Li, S. A., & Wong, C. C. (2018). Visual Object Recognition and Pose Estimation Based on a Deep Semantic Segmentation Network. IEEE Sensors Journal, 18(22), 9370-9381. https://doi.org/10.1109/JSEN.2018.2870957
Lin, J., & Zhang, F. (2020b). Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. Proceedings - IEEE International Conference on Robotics and Automation, 3126-3131. https://doi.org/10.1109/ICRA40945.2020.9197440
Lin, X., Wang, F., Yang, B., & Zhang, W. (2021). Autonomous Vehicle Localization with Prior Visual Point Cloud Map Constraints in GNSS-Challenged Environments. Remote Sensing, 13(3), 506. https://doi.org/10.3390/RS13030506
Lou, Y., Zhang, T., Tang, J., Song, W., Zhang, Y., & Chen, L. (2018). A Fast Algorithm for Rail Extraction Using Mobile Laser Scanning Data. Remote Sensing, 10(12), 1998. https://doi.org/10.3390/RS10121998
Lourenco, F., & Araujo, H. (2021b). Intel RealSense SR305, D415 and L515: Experimental evaluation and comparison of depth estimation. VISIGRAPP 2021 - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 4, 362-369. https://doi.org/10.5220/0010254203620369
Lowe, B. D. (2011). Accuracy and validity of observational estimates of wrist and forearm posture. 47(5), 527-554. https://doi.org/10.1080/00140130310001653057
Ma, S., Kim, K., & Novak, D. (2018). A Robotic Wearable Exoskeleton for Construction: Characterizing Human Box-Lifting Behavior Using Wearable Inertial Motion Sensors. https://doi.org/10.1061/9780784481288.003
Maalek, R., & Sadeghpour, F. (2013). Accuracy assessment of Ultra-Wide Band technology in tracking static resources in indoor construction scenarios. Automation in Construction, 30, 170-183. https://doi.org/10.1016/J.AUTCON.2012.10.005
Maddern, W., Pascoe, G., Linegar, C., & Newman, P. (2016). 1 year, 1000 km: The Oxford RobotCar dataset. 36(1), 3-15. https://doi.org/10.1177/0278364916679498
Mäenpää, T., Viertola, J., & Pietikäinen, M. (2003). Optimising Colour and Texture Features for Real-time Visual Inspection. Pattern Analysis & Applications, 6(3), 169-175. https://doi.org/10.1007/S10044-002-0179-1
PAGE 145
Magar, J. (2021). Implementation of Safety Management System in Managing Construction Projects. International Journal of Creative Research Thoughts. https://doi.org/10.5281/ZENODO.4505877
Maschinen, B., Investition, A., Beschaffungen, G., Ersatzbeschaffungen, B., & Mittelherkunft, S. (n.d.). Model Control Systems.
Mattila, M., Karwowski, W., & Vilkki, M. (1993). Analysis of working postures in hammering tasks on building construction sites using the computerized OWAS method. Applied Ergonomics, 24(6), 405-412. https://doi.org/10.1016/0003-6870(93)90172-6
Mazhar, O., Babuska, R., & Kober, J. (2021). GEM: Glare or Gloom, I Can Still See You - End-to-End Multi-Modal Object Detection. IEEE Robotics and Automation Letters, 6(4), 6321-6328. https://doi.org/10.1109/LRA.2021.3093871
McKinsey & Company. (2020). The next normal in construction.
Mehdi, S. M., Naqvi, R. A., & Mehdi, S. Z. (2021). Autonomous object detection and tracking robot using Kinect v2. 4th International Conference on Innovative Computing, ICIC 2021. https://doi.org/10.1109/ICIC53490.2021.9692932
Migniot, C., & Ababsa, F. (n.d.). 3D Human Tracking in a Top View Using Depth Information Recorded by the Xtion Pro Live Camera.
Milan, H. F., Perano, K. M., & Gebremedhin, K. G. (n.d.). Procedures for measuring 3-D surface area and surface temperature of livestock. https://doi.org/10.6084/m9.figshare.5176879
Minemura, K., Liau, H., Monrroy, A., & Kato, S. (2018). LMNet: Real-time multiclass object detection on CPU using 3D LiDAR. Proceedings of 2018 3rd Asia-Pacific Conference on Intelligent Robot Systems, ACIRS 2018, 28-34. https://doi.org/10.1109/ACIRS.2018.8467245
Miroslaw Skibniewski, B. J. (1988). Framework for Decision-Making on Implementing Robotics in Construction. Journal of Computing in Civil Engineering, 2(2), 188-201. https://doi.org/10.1061/(ASCE)0887-3801(1988)2:2(188)
Moffatt, A., Platt, E., Mondragon, B., Kwok, A., Uryeu, D., & Bhandari, S. (2020). Obstacle Detection and Avoidance System for Small UAVs using a LiDAR. 2020 International Conference on Unmanned Aircraft Systems, ICUAS 2020, 633-640. https://doi.org/10.1109/ICUAS48674.2020.9213897
Mogre, N., Bhagat, S., Bhoyar, K., Hadke, H., & Ingole, P. (2022). Real-Time Object Detection And Height Measurement. International Research Journal of Modernization in Engineering Technology and Science, 3825, 2582-5208. https://doi.org/10.1016/j.procs.2018.05.144
PAGE 146
Mohd Nasrun Mohd Nawi. (2015, May). Malaysian Industrialised Building System (IBS): A Review of Studies. https://www.researchgate.net/publication/274780826_Malaysian_Industrialised_Building_System_IBS_A_Review_of_Studies
Moosmann, F., & Stiller, C. (2011). Velodyne SLAM. IEEE Intelligent Vehicles Symposium, Proceedings, 393-398. https://doi.org/10.1109/IVS.2011.5940396
Mouats, T., Aouf, N., Chermak, L., & Richardson, M. A. (2015). Thermal Stereo Odometry for UAVs. https://doi.org/10.1109/JSEN.2015.2456337
Mu, Z., Liu, L., Jia, L., Zhang, L., Ding, N., & Wang, C. (2022). Intelligent demolition robot: Structural statics, collision detection, and dynamic control. Automation in Construction, 142, 104490. https://doi.org/10.1016/J.AUTCON.2022.104490
Mutiara Sari, D., Rizaldy Pratama, A., Pramadihanto, D., & Sandi Marta, B. (2022). 3D Teknologi Informasi Dan Komunikasi, 7(1), 59-66. https://doi.org/10.25139/INFORM.V7I1.4570
Nagata, M., Baba, N., Tachikawa, H., Shimizu, I., & Aoki, T. (1997). Steel Frame Welding Robot Systems and Their Application at the Construction Site. Computer-Aided Civil and Infrastructure Engineering, 12(1), 15-30. https://doi.org/10.1111/0885-9507.00043
NAHB. (2004). Mass custom builders: Redefining customers options. National Association of Home Builders (NAHB).
Nath, N. D., Akhavian, R., & Behzadan, A. H. (2017a). Ergonomic analysis of Ergonomics, 62, 107-117. https://doi.org/10.1016/J.APERGO.2017.02.007
National Center for Health Statistics (U.S.). National Health and Nutrition Examination Survey, United States, 2011.
Nebiker, S., Meyer, J., Blaser, S., Ammann, M., Rhyner, S., Nüchter, A., & Goulette, F. (2021). Outdoor Mobile Mapping and AI-Based 3D Object Detection with Low-Cost RGB-D Cameras: The Use Case of On-Street Parking Statistics. https://doi.org/10.3390/rs13163099
Neupane, C., Koirala, A., Wang, Z., & Walsh, K. B. (2021). Evaluation of Depth Cameras for Use in Fruit Localization and Sizing: Finding a Successor to Kinect v2. Agronomy, 11(9), 1780. https://doi.org/10.3390/AGRONOMY11091780
PAGE 147
Niskanen, M., Silvén, O., & Kauppinen, H. (n.d.). Color And Texture Based Wood Inspection With Non-Supervised Clustering.
Novkovic, T., Pautrat, R., Furrer, F., Breyer, M., Siegwart, R., & Nieto, J. (2020). Object Finding in Cluttered Scenes Using Interactive Perception. Proceedings - IEEE International Conference on Robotics and Automation, 8338-8344. https://doi.org/10.1109/ICRA40945.2020.9197101
O'Connor, A. C., Gallaher, M., Clark-Sutton, K., Lapidus, D., Oliver, Z. T., Scott, T. J., Wood, D. W., Gonzalez, M. A., Brown, E. G., & Fletcher, J. (2019). Economic Benefits of the Global Positioning System (GPS).
Object Classification, Detection and State Estimation Using YOLO v3 Deep Neural Network and Sensor Fusion of Stereo Camera and LiDAR - ProQuest. (n.d.). Retrieved September 18, 2022, from https://www.proquest.com/docview/2597492530
Palikhe, S., Yirong, M., Choi, B. Y., & Lee, D. E. (2020). Analysis of Musculoskeletal Using Simulation. Sustainability, 12(14), 5693. https://doi.org/10.3390/SU12145693
Pan, M., & Pan, W. (2020). Stakeholder Perceptions of the Future Application of Construction Robots for Buildings in a Dialectical System Framework. Journal of Management in Engineering. https://doi.org/10.1061/(ASCE)ME.1943-5479.0000846
Papageorgiou, C., & Poggio, T. (2000). A Trainable System for Object Detection. International Journal of Computer Vision, 38(1), 15-33.
Park, M. W., Makhmalbaf, A., & Brilakis, I. (2011). Comparative study of vision tracking methods for tracking of construction site resources. Automation in Construction, 20(7), 905-915. https://doi.org/10.1016/J.AUTCON.2011.03.007
Pastor, J. M., Balaguer, C., Rodriguez, F. J., & Diez, R. (2001). Computer-Aided Architectural Design Oriented to Robotized Facade Panels Manufacturing. Computer-Aided Civil and Infrastructure Engineering, 16(3), 216-227. https://doi.org/10.1111/0885-9507.00227
Pavón-Pulido, N., López-Riquelme, J. A., & Feliú-Batlle, J. J. (2020). IoT Architecture for Smart Control of an Exoskeleton Robot in Rehabilitation by Using a Natural User Interface Based on Gestures. Journal of Medical Systems, 44(9), 1-10. https://doi.org/10.1007/S10916-020-01602-W
PAGE 148
Payne, J. S. (2020). Autonomous Interior Mapping Robot Utilizing Lidar Localization And Mapping.
Pereira, M., Silva, D., Santos, V., & Dias, P. (2016). Self calibration of multiple LIDARs and cameras on autonomous vehicles. Robotics and Autonomous Systems, 83, 326-337. https://doi.org/10.1016/J.ROBOT.2016.05.010
Pham, T. T. D., Nguyen, H. T., Lee, S., & Won, C. S. (2017). Moving object detection with Kinect v2. 2016 IEEE International Conference on Consumer Electronics - Asia, ICCE-Asia 2016. https://doi.org/10.1109/ICCE-ASIA.2016.7804827
Pradhananga, N., & Teizer, J. (2013). Automatic spatio-temporal analysis of construction site equipment operations using GPS data. Automation in Construction, 29, 107-122. https://doi.org/10.1016/J.AUTCON.2012.09.004
Pritschow, G., Dalacker, M., Kurz, J., & Gaenssle, M. (1996). Technological aspects in the development of a mobile bricklaying robot. Automation in Construction, 5(1), 3-13. https://doi.org/10.1016/0926-5805(95)00015-1
Priyantha, N. B. (2001). The Cricket Indoor Location System.
Qi, B., Razkenari, M., Costin, A., Kibert, C., & Fu, M. (2021a). A systematic review of emerging technologies in industrialized construction. Journal of Building Engineering, 39, 102265. https://doi.org/10.1016/J.JOBE.2021.102265
Qian, X., Liu, X., & Ye, C. (2018). A Gaussian mixture model based visual feature matching scheme for small object detection from RGB-D data. 2017 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2017, 2017-July, 91-96. https://doi.org/10.1109/RCAR.2017.8311841
Raju, V. B., & Sazonov, E. (2022). FOODCAM: A Novel Structured Light-Stereo Imaging System for Food Portion Size Estimation. Sensors, 22(9), 3300. https://doi.org/10.3390/S22093300
Rakhimkul, S., Kim, A., Pazylbekov, A., & Shintemirov, A. (2019). Autonomous object detection and grasping using deep learning for design of an intelligent assistive robot manipulation system. Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, 2019-October, 3962-3968. https://doi.org/10.1109/SMC.2019.8914465
Rao, S., Joshi, S., & Kanade, A. (2000). Growth in some physical dimensions in relation to adolescent growth spurt among rural Indian children. Annals of Human Biology, 27(2), 127-138. https://doi.org/10.1080/030144600282244
Ray, S. J., & Teizer, J. (2012). Real-time construction worker posture analysis for ergonomics training. Advanced Engineering Informatics, 26(2), 439-455. https://doi.org/10.1016/J.AEI.2012.02.011
PAGE 149
Razkenari, M., Bing, Q., Fenner, A., Hakim, H., Costin, A., & Kibert, C. J. (2019). Industrialized Construction: Emerging Methods and Technologies. Computing in Civil Engineering 2019: Data, Sensing, and Analytics - Selected Papers from the ASCE International Conference on Computing in Civil Engineering 2019, 352-359. https://doi.org/10.1061/9780784482438.045
RealSense, I. (2020). Intel® RealSense™ LiDAR Camera L515 Datasheet. www.intel.com/design/literature.htm
Reed, K. A. (2002). The Role of the CIMsteel Integration Standards in Automating the Erection and Surveying of Constructional Steelwork. https://www.nist.gov/publications/role-cimsteel-integration-standards-automating-erection-and-surveying-constructional
Ridolfi, M., van de Velde, S., Steendam, H., & de Poorter, E. (2018). Analysis of the Scalability of UWB Indoor Localization Solutions for High User Densities. Sensors (Basel, Switzerland), 18(6). https://doi.org/10.3390/S18061875
Robu.in. (n.d.). Buy Intel RealSense LiDAR Depth Camera L515 | Robu.in. Retrieved September 29, 2022, from https://robu.in/product/intel-realsense-lidar-depth-camera-l515/
Rodrigues, I. R., Dantas, M., Oliveira, A., Gibson Barbosa, F., Bezerra, D., Souza, R., Valéria, M., Patricia, M., Endo, T., Kelner, J., Djamel, & Sadok, H. (2022). A framework for robotic arm pose estimation and movement prediction based on deep and extreme learning models. https://doi.org/10.48550/arxiv.2205.13994
Roynard, X., Deschaud, J. E., & Goulette, F. (2018). Paris-Lille-3D: A large and high-quality ground truth urban point cloud dataset for automatic segmentation and classification. 37(6), 545-557. https://doi.org/10.1177/0278364918767506
Sarmadi, H., Muñoz-Salinas, R., Álvaro Berbís, M., Luna, A., & Medina-Carnicer, R. (2021). Joint Scene and Object Tracking for Cost-Effective Augmented Reality Guided Patient Positioning in Radiation Therapy. https://www.microsoft.com/en-us/hololens
Sawhney, A., Riley, M., & Irizarry, J. (2020). Construction 4.0. https://doi.org/10.1201/9780429398100
Schwartz, W. R., Kembhavi, A., Harwood, D., & Davis, L. S. (2009). Human Detection Using Partial Least Squares Analysis. Proceedings of the IEEE International Conference on Computer Vision, 24-31. https://doi.org/10.1109/ICCV.2009.5459205
PAGE 150
Schwarz, M., Milan, A., Periyasamy, A. S., & Behnke, S. (2017). RGB-D object detection and semantic segmentation for autonomous manipulation in clutter. 37(4-5), 437-451. https://doi.org/10.1177/0278364917713117
Seol, S., Lee, E. K., & Kim, W. (2017). Indoor mobile object tracking using RFID. Future Generation Computer Systems, 76, 443-451. https://doi.org/10.1016/J.FUTURE.2016.08.005
Servi, M., Mussi, E., Profili, A., Furferi, R., Volpe, Y., Governi, L., & Buonamici, F. (2021a). Metrological Characterization and Comparison of D415, D455, L515 RealSense Devices in the Close Range. Sensors, 21(22), 7770. https://doi.org/10.3390/S21227770
Sewio. (n.d.). Real-time location system for indoor tracking in industry - Sewio RTLS. Retrieved October 23, 2022, from https://www.sewio.net/rtls-in-industry/
Sharma, B. (2021, February 10). What is LiDAR technology and how does it work? https://www.geospatialworld.net/blogs/what-is-lidar-technology-and-how-does-it-work/
Sharma, P., & Valles, D. (2020). Backbone Neural Network Design of Single Shot Detector from RGB-D Images for Object Detection. 2020 11th IEEE Annual Ubiquitous Computing, Electronics and Mobile Communication Conference, UEMCON 2020, 0112-0117. https://doi.org/10.1109/UEMCON51285.2020.9298175
Shen, X., Awolusi, I., & Marks, E. (2017). Construction Equipment Operator Physiological Data Assessment and Tracking. Practice Periodical on Structural Design and Construction, 22(4). https://doi.org/10.1061/(ASCE)SC.1943-5576.0000329
Shih, N. (2017). The Application of 3-D Scanner in the Representation of Building Construction Site. Proceedings of the 19th International Symposium on Automation and Robotics in Construction (ISARC). https://doi.org/10.22260/ISARC2002/0053
Shin, H., Hwang, H., Yoon, H., & Lee, S. (2019). Integration of deep learning-based object recognition and robot manipulator for grasping objects. 2019 16th International Conference on Ubiquitous Robots, UR 2019, 174-178. https://doi.org/10.1109/URAI.2019.8768650
SICK LD-MRS | AutonomouStuff. (n.d.). Retrieved September 29, 2022, from https://autonomoustuff.com/products/sick-ld-mrs
Silberman, N., Hoiem, D., Kohli, P., & Fergus, R. (2012). Indoor segmentation and support inference from RGBD images. Lecture Notes in Computer Science, 7576 LNCS(PART 5), 746-760. https://doi.org/10.1007/978-3-642-33715-4_54
PAGE 151
Simultaneous tracking and shape estimation with laser scanners | IEEE Conference Publication | IEEE Xplore. (n.d.). Retrieved September 19, 2022, from https://ieeexplore.ieee.org/abstract/document/6641087
Song, K. T., Chang, Y. H., & Chen, J. H. (2019). 3D vision for object grasp and obstacle avoidance of a collaborative robot. IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, 2019-July, 254-258. https://doi.org/10.1109/AIM.2019.8868694
Steich, K., Kamel, M., Beardsley, P., Obrist, M. K., Siegwart, R., & Lachat, T. (2016). Tree cavity inspection using aerial robots. IEEE International Conference on Intelligent Robots and Systems, 2016-November, 4856-4862. https://doi.org/10.1109/IROS.2016.7759713
Steinbaeck, J., Druml, N., Tengg, A., Steger, C., & Hillbrand, B. (2018). Time-of-flight cameras for parking assistance: A feasibility study. ASDAM 2018 - Proceedings: 12th International Conference on Advanced Semiconductor Devices and Microsystems. https://doi.org/10.1109/ASDAM.2018.8544683
Story, M., Webb, P., Fletcher, S. R., Tang, G., Jaksic, C., & Carberry, J. (2022). Do Speed and Proximity Affect Human-Robot Collaboration with an Industrial Robot Arm? International Journal of Social Robotics, 14(4), 1087-1102. https://doi.org/10.1007/S12369-021-00853-Y
St-Vincent, M., Chicoine, D., & Serge, S. (1996). Work-Related Musculoskeletal Disorders (WMSDs).
Tadic, V., Odry, Á., Odry, A., Kecskes, I., Burkus, E., Kiraly, Z., & Odry, P. (2019). Application of Intel RealSense Cameras for Depth Image Generation in Robotics. WSEAS Transactions on Computers. https://www.researchgate.net/publication/336495781
Taneja, S., Akcamete, A., Akinci, B., Garrett, J. H., Soibelman, L., & East, E. W. (2011). Analysis of Three Indoor Localization Technologies for Supporting Operations and Maintenance Field Tasks. Journal of Computing in Civil Engineering. https://doi.org/10.1061/(ASCE)CP.1943-5487.0000177
Tang, S., Roberts, D., & Golparvar-Fard, M. (2020). Human-object interaction recognition for automatic construction site safety inspection. Automation in Construction, 120, 103356. https://doi.org/10.1016/J.AUTCON.2020.103356
PAGE 152
Tehrani, B. M., & Alwisy, A. (2022). Assessment of Exoskeletons for the Rehabilitation of Industrialized Construction Workforce. Computing in Civil Engineering 2021, 313-320. https://doi.org/10.1061/9780784483893.039
Ten Harkel, T. C., Speksnijder, C. M., van der Heijden, F., Beurskens, C. H. G., Ingels, K. J. A. O., & Maal, T. J. J. (2017). Depth accuracy of the RealSense F200: Low-cost 4D facial imaging. Scientific Reports, 7(1), 1-8. https://doi.org/10.1038/s41598-017-16608-7
Kinect and Its Comparison to Kinect V1 and Kinect V2. Sensors, 21(2), 413. https://doi.org/10.3390/S21020413
Tou, J. Y., Lau, P. Y., & Tay, H. Y. (2007). Computer Vision-based Wood Recognition System. https://www.researchgate.net/publication/264886592_Computer_Vision-based_Wood_Recognition_System
Ulrich, L., Vezzetti, E., Moos, S., & Marcolin, F. (2020). Analysis of RGB-D camera technologies for supporting different facial usage scenarios. Multimedia Tools and Applications, 79(39-40), 29375-29398. https://doi.org/10.1007/S11042-020-09479-0
Ultrasonic Sensors in Robotics | Into Robotics. (2013, January 25). https://www.intorobotics.com/interfacing-programming-ultrasonic-sensors-tutorials-resources/
Valero, E., Sivanathan, A., Bosché, F., & Abdel-Wahab, M. (2017a). Analysis of construction trade worker body motions using a wearable and wireless motion sensor network. Automation in Construction, 83, 48-55. https://doi.org/10.1016/J.AUTCON.2017.08.001
Van Wyk, P. M., Weir, P. L., Andrews, D. M., Fiedler, K. M., & Callaghan, J. P. (2009). Determining the optimal size for posture categories used in video-based posture assessment methods. 52(8), 921-930. https://doi.org/10.1080/00140130902752118
Vilaça, R., Ramos, J., Silva, V., Sepúlveda, J., & Esteves, J. S. (2017). Mobile Platform Motion Control System Based on Human Gestures. International Journal of Mechatronics and Applied Mechanics, 1.
Wang, D. Z., Posner, I., & Newman, P. (2015b). Model-free detection and tracking of dynamic objects with 2D lidar. International Journal of Robotics Research, 34(7), 1039-1063. https://doi.org/10.1177/0278364914562237
PAGE 153
Wang, H., Liu, X., Yuan, X., & Liang, D. (2016). Multi-perspective terrestrial LiDAR point cloud registration using planar primitives. 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 6722-6725. https://doi.org/10.1109/IGARSS.2016.7730755
Wang, L., Gao, R., Váncza, J., Krüger, J., Wang, X. V., Makris, S., & Chryssolouris, G. (2019). Symbiotic human-robot collaborative assembly. CIRP Annals, 68(2), 701-726. https://doi.org/10.1016/J.CIRP.2019.05.002
Wang, L., Liu, S., Liu, H., & Wang, X. V. (2020). Overview of human-robot collaboration in manufacturing. Lecture Notes in Mechanical Engineering, 15-58. https://doi.org/10.1007/978-3-030-46212-3_2
Wang, R., An, M., Shao, S., Yu, M., Wang, S., & Xu, X. (2021). Lidar Sensor-Based Object Recognition Using Machine Learning. Journal of Russian Laser Research, 42(4), 484-493. https://doi.org/10.1007/S10946-021-09986-X
Wang, X. V., Zhang, X., Yang, Y., & Wang, L. (2020). A Human-Robot Collaboration System towards High Accuracy. Procedia CIRP, 93, 1085-1090. https://doi.org/10.1016/J.PROCIR.2020.04.085
Wang, Y., Wang, C., Long, P., Gu, Y., & Li, W. (2021a). Recent advances in 3D object detection based on RGB-D: A survey. Displays, 70, 102077. https://doi.org/10.1016/J.DISPLA.2021.102077
Wang, Z., Li, H., & Zhang, X. (2019). Construction waste recycling robot for nails and screws: Computer vision technology and neural network approach. Automation in Construction, 97, 220-228. https://doi.org/10.1016/J.AUTCON.2018.11.009
Wiggermann, N., Bradtmiller, B., Bunnell, S., Hildebrand, C., Archibeque, J., Ebert, S., Reed, M. P., & Jones, M. L. H. (2019). Anthropometric Dimensions of Individuals With High Body Mass Index. Human Factors, 61(8), 1277. https://doi.org/10.1177/0018720819839809
Willmann, J., Knauss, M., Bonwetsch, T., Apolinarska, A. A., Gramazio, F., & Kohler, M. (2016). Robotic timber construction: Expanding additive fabrication to new dimensions. Automation in Construction, 61, 16-23. https://doi.org/10.1016/J.AUTCON.2015.09.011
Wu, T. E., Tsai, C. C., & Guo, J. I. (2018). LiDAR/camera sensor fusion technology for pedestrian detection. Proceedings - 9th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2017, 2018-February, 1675-1678. https://doi.org/10.1109/APSIPA.2017.8282301
Wu, T., Zheng, W., Yin, W., & Zhang, H. (2020). Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications. Remote Sensing, 13(1), 77. https://doi.org/10.3390/RS13010077
PAGE 154
Wuni, I. Y., & Shen, G. Q. (2020). Barriers to the adoption of modular integrated construction: Systematic review and meta-analysis, integrated conceptual framework, and strategies. Journal of Cleaner Production, 249, 119347. https://doi.org/10.1016/J.JCLEPRO.2019.119347
Xie, Y., Deng, L., Sun, T., Fu, Y., Li, J., Cui, X., Yin, H., Deng, S., Xiao, J., & Chen, B. (2022). A4LidarTag: Depth-Based Fiducial Marker for Extrinsic Calibration of Solid-State Lidar and Camera. IEEE Robotics and Automation Letters, 7(3), 6487-6494. https://doi.org/10.1109/LRA.2022.3173033
Xu, Y., Chen, J., Yang, Q., & Guo, Q. (2019). Human posture recognition and fall detection using Kinect V2 camera. Chinese Control Conference, CCC, 2019-July, 8488-8493. https://doi.org/10.23919/CHICC.2019.8865732
Yang, L., Cao, J., Zhu, W., & Tang, S. (2015). Accurate and efficient object tracking based on passive RFID. IEEE Transactions on Mobile Computing, 14(11), 2188-2200. https://doi.org/10.1109/TMC.2014.2381232
Yang, Y., Yan, H., Dehghan, M., & Ang, M. H. (2015). Real-time human-robot interaction in complex environment using Kinect v2 image recognition. Proceedings of the 2015 7th IEEE International Conference on Cybernetics and Intelligent Systems, CIS 2015 and Robotics, Automation and Mechatronics, RAM 2015, 112-117. https://doi.org/10.1109/ICCIS.2015.7274606
Yaskawa. (2022). Articulated Robots. https://www.yaskawa.com/products/robotics/robots-with-iec/articulated-robots
Ye, Y., Fu, L., & Li, B. (2016). Object detection and tracking using multi-layer laser for autonomous urban driving. IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 259-264. https://doi.org/10.1109/ITSC.2016.7795564
Yin, L., Jin, S., Tian, C., Ma, X., & Ou, Y. (2019). A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion. Applied Sciences, 9(10), 2105. https://doi.org/10.3390/APP9102105
Yin, X., Liu, H., Chen, Y., & Al-Hussein, M. (2019). Building information modelling for off-site construction: Review and future directions. Automation in Construction, 101, 72-91. https://doi.org/10.1016/J.AUTCON.2019.01.010
Yoon, J. S., Bae, S. H., & Kuc, T. Y. (2020). Human recognition and tracking in narrow indoor environment using 3D lidar sensor. International Conference on Control, Automation and Systems, 2020-October, 978-981. https://doi.org/10.23919/ICCAS50221.2020.9268208
PAGE 155
You, S., Kim, J. H., Lee, S., Kamat, V., & Robert, L. P. (2018). Enhancing perceived safety in human-robot collaborative construction using immersive virtual environments. Automation in Construction, 96, 161-170. https://doi.org/10.1016/j.autcon.2018.09.008
Yu, H., Fu, Q., Yang, Z., Tan, L., Sun, W., & Sun, M. (2019a). Robust Robot Pose Estimation for Challenging Scenes With an RGB-D Camera. IEEE Sensors Journal, 19(6), 2217-2229. https://doi.org/10.1109/JSEN.2018.2884321
Yu, Y., Yang, X., Li, H., Luo, X., Guo, H., & Fang, Q. (2019). Joint-Level Vision-Based Ergonomic Assessment Tool for Construction Workers. Journal of Construction Engineering and Management, 145(5), 04019025. https://doi.org/10.1061/(ASCE)CO.1943-7862.0001647
Zhang, D., Xia, F., Yang, Z., Yao, L., & Zhao, W. (2010). Localization technologies for indoor human tracking. 2010 5th International Conference on Future Information Technology, FutureTech 2010 - Proceedings. https://doi.org/10.1109/FUTURETECH.2010.5482731
Zhang, H., Zheng, J., Dorr, G., Zhou, H., & Ge, Y. (2013). Testing of GPS Accuracy for Precision Forestry Applications. Arabian Journal for Science and Engineering, 39(1), 237-245. https://doi.org/10.1007/S13369-013-0861-1
Zhang, L., Li, Q., Li, M., Mao, Q., & Nüchter, A. (2013). Multiple Vehicle-like Target Tracking Based on the Velodyne LiDAR. IFAC Proceedings Volumes, 46(10), 126-131. https://doi.org/10.3182/20130626-3-AU-2035.00058
Zhang, M., Cao, T., & Zhao, X. (2017a). Applying Sensor-Based Technology to Improve Construction Safety Management. Sensors (Basel, Switzerland), 17(8). https://doi.org/10.3390/S17081841
Zhang, W., Lee, M. W., Jaillon, L., & Poon, C. S. (2018). The hindrance to using 204, 70-81. https://doi.org/10.1016/J.JCLEPRO.2018.08.190
Zhang, W., Zhou, C., Yang, J., & Huang, K. (2018). LiSeg: Lightweight Road-object Semantic Segmentation in 3D LiDAR Scans for Autonomous Driving. IEEE Intelligent Vehicles Symposium, Proceedings, 2018-June, 1021-1026. https://doi.org/10.1109/IVS.2018.8500701
Zhang, Z., Liang, Z., Zhang, M., Zhao, X., Li, H., Yang, M., Tan, W., & Pu, S. (2022). RangeLVDet: Boosting 3D Object Detection in LIDAR with Range Image and RGB Image. IEEE Sensors Journal, 22(2), 1391-1403. https://doi.org/10.1109/JSEN.2021.3127626
PAGE 156
Zhao, X., Sun, P., Xu, Z., Min, H., & Yu, H. (2020). Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications. IEEE Sensors Journal, 20(9), 4901-4913. https://doi.org/10.1109/JSEN.2020.2966034
Zhou, Z., Li, L., Wang, R., & Zhang, X. (2021). Deep Learning on 3D Object Detection for Automatic Plug-in Charging Using a Mobile Manipulator. Proceedings - IEEE International Conference on Robotics and Automation, 2021-May, 4148-4154. https://doi.org/10.1109/ICRA48506.2021.9561106
Zhu, Z., & Brilakis, I. (2010). Parameter optimization for automated concrete detection in image data. Automation in Construction, 19(7), 944-953. https://doi.org/10.1016/J.AUTCON.2010.06.008
Zoghlami, F., Kaden, M., Villmann, T., Schneider, G., & Heinrich, H. (2021). Sensors data fusion for smart decisions making: A novel bi-functional system for the evaluation of sensors contribution in classification problems. Proceedings of the IEEE International Conference on Industrial Technology, 2021-March, 1417-1423. https://doi.org/10.1109/ICIT46573.2021.9453551
PAGE 157
BIOGRAPHICAL SKETCH
Kush received his Master of Science in Construction Management at the Deendayal Energy University, India. The belief that innovation is the way forward lies at the core of his outlook. This has led to his research interests in the development of a smart control system that integrates computer vision and sensing technologies to promote safe human-robot collaboration (HRC) in industrialized construction. At present, he is investigating innovative control systems and advanced sensing technologies to support the robotic station.