
This coursework constitutes 90% of your final mark for this module and comprises two mandatory tasks: Python programming and report writing. You must upload your work to Ultra before the deadline specified on the cover page.
2.1 Task 1 – Python Programming (40% subtotal)
In this coursework, you are given a set of 3D point-clouds with appearance features (i.e. RGB values). These point-clouds were collected using a Kinect system in a mysterious PhD Lab (see Figure 1). Several virtual objects are also positioned among those point-clouds. Your task is to write a Python program that can automatically detect those objects in an image and use them as anchors to navigate through the 3D scene and collect the objects. If you land close enough to an object, it will be automatically captured and removed from the scene. A set of example images containing those virtual objects is provided. These example images are used to train a classifier (basic solution) and an object detector (advanced solution) using deep learning approaches in order to locate the targets. You are required to attempt both the basic and advanced solutions. “PacMan Helper.py” provides some basic functions to help you complete the task. “PacMan Helper Demo.ipynb” demonstrates how to use these functions, for example to obtain a 2D image by projecting the 3D point-clouds onto the camera image-plane, and to re-position and rotate the camera. All the code and data are available on Ultra. You are encouraged to read the given source code, particularly “PacMan Skeleton.ipynb”.
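The projection from 3D point-cloud coordinates to 2D pixel coordinates is handled by the provided helper, but the underlying idea is a standard pinhole-camera projection. The following is a minimal illustrative sketch using NumPy; the function name, variable names, and intrinsic values are assumptions for illustration only and are not taken from “PacMan Helper.py”.

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project Nx3 world points onto the image plane of a pinhole camera.

    points_xyz : (N, 3) array of 3D points
    K          : (3, 3) camera intrinsic matrix (illustrative values below)
    R, t       : camera rotation (3, 3) and translation (3,), world-to-camera
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    cam = (R @ points_xyz.T).T + t          # transform into the camera frame
    in_front = cam[:, 2] > 0                # keep points with positive depth
    proj = (K @ cam.T).T                    # apply intrinsics
    pixels = proj[:, :2] / proj[:, 2:3]     # perspective divide
    return pixels, in_front

# Illustrative intrinsics (focal length and principal point are assumptions).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)               # camera at the world origin
points = np.random.rand(100, 3) * 2 + np.array([0.0, 0.0, 1.0])
pixels, mask = project_points(points, K, R, t)
```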
Detection Solution using Basic Binary Classifier (10%). Implement a deep neural network model that can classify an image patch into two categories: target object and background. You can use the given images to train your neural network. The trained classifier can then be used in a sliding-window fashion to detect the target object in a given image.
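One possible structure for this basic solution is sketched below with PyTorch: a small CNN that scores fixed-size patches, applied in a sliding-window loop over the rendered image. The patch size, stride, threshold, and architecture are assumptions; adapt them to the provided training images.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Small CNN scoring 64x64 RGB patches as target (class 1) vs background (class 0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def sliding_window_detect(model, image, patch=64, stride=32, thresh=0.9):
    """Score every patch of a (3, H, W) image tensor; return boxes above thresh."""
    model.eval()
    boxes = []
    _, H, W = image.shape
    with torch.no_grad():
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                crop = image[:, y:y + patch, x:x + patch].unsqueeze(0)
                prob = torch.softmax(model(crop), dim=1)[0, 1].item()
                if prob > thresh:
                    boxes.append((x, y, x + patch, y + patch, prob))
    return boxes
```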
Detection Solution using Advanced Object Detector (10%). Implement a deep neural network model that can detect the target object in an image. You may manually or automatically create your own dataset for training the detector. The detector should predict bounding boxes containing the object in a given image.
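One common route for the advanced solution, sketched below, is to fine-tune a pre-trained torchvision detector on your own labelled images. The choice of Faster R-CNN, the two-class head, and the dataset interface are assumptions, not part of the provided code.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a COCO-pretrained Faster R-CNN and replace its classification head
# with one for two classes: background (0) and the target object (1).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# Training step sketch: `images` is a list of (3, H, W) tensors and `targets`
# a list of dicts with "boxes" (N, 4) and "labels" (N,) built from your dataset.
def train_step(images, targets):
    model.train()
    loss_dict = model(images, targets)      # detector returns a dict of losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time the model returns predicted boxes, labels, and scores:
# model.eval(); predictions = model([image_tensor])
```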
Navigation and Collection Task Completion (10%). There are 11 target objects in the scene. Use the trained models to perform scene navigation and object collection. If you land close enough to an object, it will be automatically captured and removed from the scene. You may compare the performance of both models.
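A simple control strategy for this part might look like the sketch below: render a view, run a detector, turn towards the best detection, and step forward. The functions passed in for rendering and camera motion are hypothetical placeholders standing in for whatever “PacMan Helper.py” actually provides, and the assumed field of view and box format are illustrative only.

```python
def collect_objects(render_view, rotate_camera, move_camera, detector,
                    max_steps=200, step_size=0.5, fov_deg=60):
    """Greedy detect-then-move loop.

    render_view, rotate_camera and move_camera are placeholders for the camera
    utilities supplied in the helper module; detector returns a list of
    (x0, y0, x1, y1, score) boxes from either trained model.
    """
    for _ in range(max_steps):
        image = render_view()                      # 2D projection of the current view
        boxes = detector(image)
        if not boxes:
            rotate_camera(30)                      # nothing in view: keep scanning
            continue
        x0, y0, x1, y1, score = max(boxes, key=lambda b: b[-1])
        centre = (x0 + x1) / 2
        width = image.shape[-1]
        # Turn so the best detection sits in the middle of the view, then step
        # forward; fov_deg is an assumed horizontal field of view.
        rotate_camera((centre / width - 0.5) * fov_deg)
        move_camera(step_size)
```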
Visualisation, Coding Style, and Readability (10%). Visualise the data and your experimental results wherever appropriate. The code should be well structured, with sufficient comments on the essential parts so that the implementation of your experiments is easy to read and understand. Check the “Google Python Style Guide”3 for guidance.
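For visualising detections, a short matplotlib routine such as the sketch below is usually enough to overlay predicted boxes on a rendered image. The (x0, y0, x1, y1, score) box format matches the illustrative detectors above and is an assumption.

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def show_detections(image, boxes, title="Detections"):
    """Overlay (x0, y0, x1, y1, score) boxes on an (H, W, 3) RGB image."""
    fig, ax = plt.subplots(figsize=(8, 6))
    ax.imshow(image)
    for x0, y0, x1, y1, score in boxes:
        ax.add_patch(patches.Rectangle((x0, y0), x1 - x0, y1 - y0,
                                       fill=False, edgecolor="lime", linewidth=2))
        ax.text(x0, y0 - 4, f"{score:.2f}", color="lime")
    ax.set_title(title)
    ax.axis("off")
    plt.show()
```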
2.2 Task 2 – Report Writing (50% subtotal)
You will also write a report (maximum five pages) on your work, which you will submit to Ultra
alongside your code. The report must contain the following structure:
Introduction and Method (10%). Introduce the task and contextualise the given problem. Make sure to include a few references to previously published work in the field, demonstrating an awareness of the relevant research. Describe the model(s) and approaches you used to undertake the task. Any decisions on hyper-parameters must be stated here, including the motivation for your choices where applicable. If the basis of your decision is experimentation with a number of parameter settings, then state this.
Results and Discussion (10%). Describe, compare, and contrast the results you obtained with your model(s). Any relationships in the data should be outlined and pointed out here. Only the most important conclusions should be mentioned in the text; use tables and figures to support the section so that you do not need to describe every result in full. Describe the outcome of the experiments and the conclusions that you can draw from these results.
Robot Design (20%). Consider designing an autonomous robot to undertake the given task in a real scene. Discuss the foreseeable challenges and propose your design, including the robot's mechanical configuration, the hardware and algorithms for robot sensing and control, and system efficiency, etc. Provide appropriate justifications for your design choices with evidence from the existing literature. You may use simulators such as “CoppeliaSim Edu” or “Gazebo” for visualising your design.
Format, Writing Style, and Presentation (10%). Language usage and report format should be of a professional standard and meet the academic writing criteria, with the explanation appropriately divided as per the structure described above. Tables, figures, and references should be included and cited where appropriate. A guide to citation style can be found in the library guide4.

3 https://google.github.io/styleguide/pyguide.html
