Sharing work in progress on the automatic garbage segregation system. This operates as an independent unit in addition to Ramudorid (a road cleaning robot).
The system applies image and edge detection algorithms to the incoming video stream to identify items and classify them into one of the following categories:
- Recyclable waste – paper, cardboard, leaves, sticks, etc.
- Non-recyclable waste – plastic, bottles, wrappers, etc.
- E-waste – ICs, computer peripherals, home appliances, etc.
- Dust and sand
- Unidentified / unclassified objects
Components:
- High definition camera
- Good lighting conditions
- Grid-marked conveyor belt
- Image processing algorithm – OpenCV
- Object identification algorithm – AI
- Feedback from manually classified unidentified objects – machine learning
Challenges:
- Objects broken beyond recognition, or melted together with other garbage, are difficult to identify
- Very fine particles such as glitter and thermocol balls cannot be segregated in this fashion
Tools and techniques:
1. Media streaming on VP8
Using the Raspberry Pi camera and a WebRTC peer-to-peer streaming network over the VP8 video codec, we achieve a high enough frame rate to capture and send images for analysis by the backend analytics engine.
2. Robot arm for lifting, using Arduino
-tbd-
- Flex / EMG sensors
- Strings controlled by servos
- Position changes on flexing or bending
3. Apache Spark ML
Machine learning algorithms can be broken down into supervised and unsupervised learning. Supervised learning includes linear and logistic regression, with classification in the form of the Naive Bayes probabilistic model, support vector machines (SVM), or random decision forests. Unsupervised learning, in contrast, works on dimensionality reduction techniques such as principal component analysis (PCA) or singular value decomposition (SVD), and often uses clustering algorithms such as k-means.
From among the three types of ML tasks (clustering, classification, and collaborative filtering), we are using the classification approach to identify the characteristics of every object on the conveyor belt.
As part of the implementation, decision trees are created for evaluation using branches and nodes.
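To illustrate the idea of branches and nodes: each internal node of a decision tree tests one feature against a threshold, and each leaf carries a predicted category. A toy pure-Scala sketch (illustrative only — the feature meanings and thresholds here are made up, and this is not the MLlib implementation):

```scala
// Toy decision tree: branch nodes test one feature against a threshold,
// leaves carry the predicted waste category.
sealed trait Node
case class Leaf(label: String) extends Node
case class Branch(feature: Int, threshold: Double, left: Node, right: Node) extends Node

// Walk the tree from the root down to a leaf.
def predict(node: Node, features: Array[Double]): String = node match {
  case Leaf(label) => label
  case Branch(f, t, left, right) =>
    if (features(f) <= t) predict(left, features) else predict(right, features)
}

// Hypothetical features: 0 = object size, 1 = reflectivity.
val tree: Node = Branch(0, 0.5,
  Leaf("dust/sand"),
  Branch(1, 0.8, Leaf("recyclable"), Leaf("e-waste")))

predict(tree, Array(0.3, 0.0)) // small object -> "dust/sand"
predict(tree, Array(0.9, 0.9)) // large, highly reflective -> "e-waste"
```

MLlib builds such trees automatically from the labelled data, choosing the features and thresholds that best split the classes.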
Snippets from programs:
Step 1: Create an environment for Spark, preferably Ubuntu with 8 GB of RAM.
Step 2: Imports for the Scala program
import org.apache.spark._
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.model.DecisionTreeModel
import org.apache.spark.mllib.util.MLUtils
Step 3: Load the data from identified objects so that the robotic arm can learn their coordinates, pick each one up, and put it in one of the bins. For this, insert the collected data into an RDD.
Step 4: Extract features to build a classifier model -> RDD containing feature arrays
Step 5: RDD containing feature arrays -> RDD containing labelled points
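Steps 3–5 can be sketched as follows. This is a minimal sketch, assuming the collected data is a CSV file (the file name garbage_features.csv is hypothetical) where every column but the last is a numeric feature and the last column is the category label:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Parse one CSV line: all columns except the last are features,
// the last column is the category label.
def toLabeledPoint(line: String): LabeledPoint = {
  val arr = line.split(',').map(_.toDouble)
  LabeledPoint(arr.last, Vectors.dense(arr.init))
}

object LoadGarbageData {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("GarbageSegregation").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Step 3: load the collected data into an RDD of raw lines
    val rawData = sc.textFile("garbage_features.csv") // hypothetical file name

    // Steps 4-5: feature arrays -> RDD of labelled points
    val labelledPoints = rawData.map(toLabeledPoint)

    // Hold out 30% of the points for testing the classifier later
    val Array(trainingData, testData) = labelledPoints.randomSplit(Array(0.7, 0.3))
  }
}
```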
Step 6: Train the model using the DecisionTree.trainClassifier method, which returns a DecisionTreeModel.
Parameters: numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins
var categoricalFeaturesInfo = Map[Int, Int]()
val model = DecisionTree.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins)
model.toDebugString // prints out the decision tree
Step 7: Use model.predict to test the model on held-out data.
For each node, the prediction carries the predicted value and, for classification, the probability of that label.
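Putting steps 6 and 7 together — a sketch assuming trainingData and testData are the RDD[LabeledPoint] splits produced when loading the data; the numClasses = 4, Gini impurity, depth, and bin values are illustrative choices, not tuned parameters:

```scala
import org.apache.spark.mllib.tree.DecisionTree

// Step 6: train the classifier (parameter values are illustrative)
val model = DecisionTree.trainClassifier(
  trainingData, numClasses = 4, categoricalFeaturesInfo = Map[Int, Int](),
  impurity = "gini", maxDepth = 5, maxBins = 32)

// Step 7: predict a label for each held-out point, paired with the true label
val predictionsAndLabels = testData.map(p => (model.predict(p.features), p.label))

// Fraction of test points the tree classified correctly
val accuracy = predictionsAndLabels.filter { case (pred, label) => pred == label }
  .count.toDouble / testData.count
println(s"Test accuracy: $accuracy")
```

The accuracy figure gives a quick check on whether the tree generalizes before wiring its predictions to the bin-selection logic.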
Reference: https://spark.apache.org/docs/latest/api/java/index.html