ShipsSatelliteImageClassification Main File Update
@@ -651,12 +651,126 @@
"source": [
"#### Explanation of features\n",
"\n",
"1. Conv2D - This is a 2 dimensional convolutional layer, the number of filters decide what the convolutional layer learns. Greater the number of filters, greater the amount of information obtained. <img src=\"https://github.com/psavarmattas/Machine-Learning-Models/blob/2308384eeb0e7e5b6e16a9d76698a59bc08b9bff/ShipsSatelliteImageClassification/assets/keras_conv2d_num_filters.png\" alt=\"MarineGEO circle logo\" style=\"height: 100px; width:100px;\"/>\n",
|
||||
"1. Conv2D - This is a 2 dimensional convolutional layer, the number of filters decide what the convolutional layer learns. Greater the number of filters, greater the amount of information obtained.\n",
|
||||
"2. MaxPooling2D - This reduces the spatial dimensions of the feature map produced by the convolutional layer without losing any range information. This allows a model to become slightly more robust\n",
|
||||
"3. Dropout - This removes a user-defined percentage of links between neurons of consecutive layers. This allows the model to be robust. It can be used in both fully convolutional layers and fully connected layers.\n",
|
||||
"4. BatchNormalization - This layer normalises the values present in the hidden part of the neural network. This is similar to MinMax/Standard scaling applied in machine learning algorithms\n",
|
||||
"5. Padding- This pads the feature map/input image with zeros allowing border features to stay."
|
||||
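"\n",
"A minimal sketch of how these layers compose in the Keras functional API (assuming `ZeroPadding2D`, `Conv2D`, `BatchNormalization`, `MaxPooling2D` and `Dropout` are imported from `tensorflow.keras.layers`, and `inputs` is some input tensor; the filter counts and rates here are placeholders, not the values used later in this notebook):\n",
"\n",
"```python\n",
"x = ZeroPadding2D((1, 1))(inputs)            # pad the border with zeros so edge features survive\n",
"x = Conv2D(16, (3, 3), padding='same')(x)    # learn 16 feature maps\n",
"x = BatchNormalization()(x)                  # normalise the hidden activations\n",
"x = MaxPooling2D((2, 2))(x)                  # halve the spatial dimensions\n",
"x = Dropout(0.25)(x)                         # randomly zero out 25% of activations during training\n",
"```"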
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"def conv_block(X,k,filters,stage,block,s=2):\n",
|
||||
" \n",
|
||||
" conv_base_name = 'conv_' + str(stage)+block+'_branch'\n",
|
||||
" bn_base_name = 'bn_'+str(stage)+block+\"_branch\"\n",
|
||||
" \n",
|
||||
" F1 = filters\n",
|
||||
" \n",
|
||||
" X = Conv2D(filters=F1, kernel_size=(k,k), strides=(s,s),\n",
|
||||
" padding='same',name=conv_base_name+'2a')(X)\n",
|
||||
" X = BatchNormalization(name=bn_base_name+'2a')(X)\n",
|
||||
" X = Activation('relu')(X)\n",
|
||||
" \n",
|
||||
" return X\n",
|
||||
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creation of Model\n",
|
||||
"\n",
|
||||
"### Block 1\n",
|
||||
"\n",
|
||||
"1. An input layer is initialised using the Input Keras layer, this defines the number of neurons present in the input layer\n",
|
||||
"2. ZeroPadding is applied to the input image, so that boundary features are not lost.\n",
|
||||
"\n",
|
||||
"### Block 2\n",
|
||||
"\n",
|
||||
"First Convolutioanl Layer, it starts with 16 filters and kernel size with (3,3) and strides (2,2). Padding is maintaned same, so the image does not chaneg spatially, until the next block in which MaxPooling occurs\n",
|
||||
"\n",
|
||||
"### Block 3 - 4\n",
|
||||
"\n",
|
||||
"Similar structure in both with a convolutional layer followed by a MaxPooling and Dropout layers.\n",
|
||||
"\n",
|
||||
"### Output Block\n",
|
||||
"\n",
|
||||
"The feature map produced by the previous convolutional layers is converted into a single column using Flatten Layer and the classified using a Dense layer(output layer) with the number of classes present in the dataset, and sigmoid as activation function."
|
||||
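"\n",
"For the 48x48x3 input used later in this notebook, the spatial dimensions evolve roughly as follows (a sketch based on the layer sequence defined in `basic_model` below, assuming Keras defaults of `padding='valid'` and `strides=pool_size` for `MaxPooling2D`):\n",
"\n",
"```text\n",
"Input                        48 x 48 x 3\n",
"ZeroPadding2D((5,5))         58 x 58 x 3\n",
"Conv2D(16, 3x3, s=2, same)   29 x 29 x 16\n",
"conv_block(k=3, 32, s=1)     29 x 29 x 32  -> MaxPool 2x2 -> 14 x 14 x 32\n",
"conv_block(k=5, 32, s=2)      7 x  7 x 32  -> MaxPool 2x2 ->  3 x  3 x 32\n",
"conv_block(k=3, 64, s=1)      3 x  3 x 64  -> MaxPool 2x2 ->  1 x  1 x 64\n",
"Flatten                      64\n",
"Dense(64) -> Dense(128) -> Dense(classes, softmax)\n",
"```"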
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"def basic_model(input_shape,classes):\n",
|
||||
" \n",
|
||||
" X_input = tf.keras.Input(input_shape)\n",
|
||||
" \n",
|
||||
" X = ZeroPadding2D((5,5))(X_input)\n",
|
||||
" \n",
|
||||
" X = Conv2D(16,(3,3),strides=(2,2),name='conv1',padding=\"same\")(X)\n",
|
||||
" X = BatchNormalization(name='bn_conv1')(X)\n",
|
||||
" \n",
|
||||
" # stage 2\n",
|
||||
" X = conv_block(X,3,32,2,block='A',s=1)\n",
|
||||
" X = MaxPooling2D((2,2))(X)\n",
|
||||
" X = Dropout(0.25)(X)\n",
|
||||
"\n",
|
||||
"# Stage 3\n",
|
||||
" X = conv_block(X,5,32,3,block='A',s=2)\n",
|
||||
" X = MaxPooling2D((2,2))(X)\n",
|
||||
" X = Dropout(0.25)(X)\n",
|
||||
" \n",
|
||||
"# Stage 4\n",
|
||||
" X = conv_block(X,3,64,4,block='A',s=1)\n",
|
||||
" X = MaxPooling2D((2,2))(X)\n",
|
||||
" X = Dropout(0.25)(X)\n",
|
||||
" \n",
|
||||
"# Output Layer\n",
|
||||
" X = Flatten()(X)\n",
|
||||
" X = Dense(64)(X)\n",
|
||||
" X = Dropout(0.5)(X)\n",
|
||||
" \n",
|
||||
" X = Dense(128)(X)\n",
|
||||
" X = Activation(\"relu\")(X)\n",
|
||||
" \n",
|
||||
" X = Dense(classes,activation=\"softmax\",name=\"fc\"+str(classes))(X)\n",
|
||||
" \n",
|
||||
" model = Model(inputs=X_input,outputs=X,name='Feature_Extraction_and_FC')\n",
|
||||
" \n",
|
||||
" return model\n",
|
||||
" pass"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'tf' is not defined",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[1;32mf:\\Machine Learning\\ML Project\\ShipsSatelliteImageClassification\\main.ipynb Cell 36\u001b[0m in \u001b[0;36m<cell line: 1>\u001b[1;34m()\u001b[0m\n\u001b[1;32m----> <a href='vscode-notebook-cell:/f%3A/Machine%20Learning/ML%20Project/ShipsSatelliteImageClassification/main.ipynb#X50sZmlsZQ%3D%3D?line=0'>1</a>\u001b[0m model \u001b[39m=\u001b[39m basic_model(input_shape\u001b[39m=\u001b[39;49m(\u001b[39m48\u001b[39;49m,\u001b[39m48\u001b[39;49m,\u001b[39m3\u001b[39;49m),classes\u001b[39m=\u001b[39;49m\u001b[39m2\u001b[39;49m)\n",
"\u001b[1;32mf:\\Machine Learning\\ML Project\\ShipsSatelliteImageClassification\\main.ipynb Cell 36\u001b[0m in \u001b[0;36mbasic_model\u001b[1;34m(input_shape, classes)\u001b[0m\n\u001b[0;32m <a href='vscode-notebook-cell:/f%3A/Machine%20Learning/ML%20Project/ShipsSatelliteImageClassification/main.ipynb#X50sZmlsZQ%3D%3D?line=0'>1</a>\u001b[0m \u001b[39mdef\u001b[39;00m \u001b[39mbasic_model\u001b[39m(input_shape,classes):\n\u001b[1;32m----> <a href='vscode-notebook-cell:/f%3A/Machine%20Learning/ML%20Project/ShipsSatelliteImageClassification/main.ipynb#X50sZmlsZQ%3D%3D?line=2'>3</a>\u001b[0m X_input \u001b[39m=\u001b[39m tf\u001b[39m.\u001b[39mkeras\u001b[39m.\u001b[39mInput(input_shape)\n\u001b[0;32m <a href='vscode-notebook-cell:/f%3A/Machine%20Learning/ML%20Project/ShipsSatelliteImageClassification/main.ipynb#X50sZmlsZQ%3D%3D?line=4'>5</a>\u001b[0m X \u001b[39m=\u001b[39m ZeroPadding2D((\u001b[39m5\u001b[39m,\u001b[39m5\u001b[39m))(X_input)\n\u001b[0;32m <a href='vscode-notebook-cell:/f%3A/Machine%20Learning/ML%20Project/ShipsSatelliteImageClassification/main.ipynb#X50sZmlsZQ%3D%3D?line=6'>7</a>\u001b[0m X \u001b[39m=\u001b[39m Conv2D(\u001b[39m16\u001b[39m,(\u001b[39m3\u001b[39m,\u001b[39m3\u001b[39m),strides\u001b[39m=\u001b[39m(\u001b[39m2\u001b[39m,\u001b[39m2\u001b[39m),name\u001b[39m=\u001b[39m\u001b[39m'\u001b[39m\u001b[39mconv1\u001b[39m\u001b[39m'\u001b[39m,padding\u001b[39m=\u001b[39m\u001b[39m\"\u001b[39m\u001b[39msame\u001b[39m\u001b[39m\"\u001b[39m)(X)\n",
"\u001b[1;31mNameError\u001b[0m: name 'tf' is not defined"
]
}
],
"source": [
"model = basic_model(input_shape=(48,48,3),classes=2)"
]
}
],
"metadata": {