Two datasets & New model update
@@ -1,498 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Pneumonia Classification Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Introduction + Set-up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Machine learning has a phenomenal range of applications, including in health and diagnostics. This tutorial walks through the complete pipeline, from loading data to predicting results, and explains how to build an X-ray image classification model from scratch that predicts whether a scan shows signs of pneumonia. This is especially relevant right now, as COVID-19 is known to cause pneumonia.\n",
"\n",
"The tutorial covers how to utilize TPUs efficiently, load in image data, build and train a convolutional neural network, fine-tune and regularize the model, and predict results. Data augmentation is not included in the model because X-ray scans are only taken in a specific orientation, and variations such as flips and rotations will not exist in real X-ray images."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"TensorFlow is a powerful tool for developing any machine learning pipeline, and today we will go over how to load combined image + CSV datasets, how to use Keras preprocessing layers for image augmentation, and how to use pre-trained models for image classification.\n",
"\n",
"Skeleton code for the DataGenerator Sequence subclass is credited to Xie29's NB.\n",
"\n",
"Run the following cell to import the necessary packages. We will be using the GPU accelerator to train our model efficiently, so remember to change the accelerator on the right to GPU. We won't be using a TPU for this notebook because data generators are not safe to run on multiple replicas; if no TPU is attached, the `try`/`except` block below simply falls back to the default strategy."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tensorflow version : 2.9.1\n",
"Number of replicas: 1\n"
]
}
],
"source": [
"import os\n",
"import PIL\n",
"import time\n",
"import math\n",
"import warnings\n",
"import numpy as np\n",
"import pandas as pd\n",
"import tensorflow as tf\n",
"import matplotlib.pyplot as plt\n",
"from sklearn.model_selection import train_test_split\n",
"from keras.utils import load_img\n",
"\n",
"SEED = 1337\n",
"print('Tensorflow version : {}'.format(tf.__version__))\n",
"\n",
"# Use a TPU if one is attached; otherwise fall back to the default\n",
"# strategy, which covers CPU and single-GPU training.\n",
"try:\n",
"    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()\n",
"    tf.config.experimental_connect_to_cluster(tpu)\n",
"    tf.tpu.experimental.initialize_tpu_system(tpu)\n",
"    strategy = tf.distribute.experimental.TPUStrategy(tpu)\n",
"except ValueError:\n",
"    strategy = tf.distribute.get_strategy()  # for CPU and single GPU\n",
"\n",
"print('Number of replicas:', strategy.num_replicas_in_sync)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data loading"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Chest X-ray data we are using from Cell divides the data into train, val, and test folders. There are only 16 files in the validation folder, and we would prefer a less extreme division between the training and validation sets, so we will append the validation files and create a new split that resembles the standard 80:20 division instead.\n",
"\n",
"The first step is to load in our data. The original PANDA dataset contains large images and masks that specify which area of the image led to the ISUP grade (which determines the severity of the cancer). Since the original images contain a lot of white space and extraneous data that is not necessary for our model, we will be using tiles to condense the images. The tiles are small sections of the masked areas, and they can be concatenated so that only the masked sections of the original image remain."
]
},
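{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick shape-level illustration of the tiling idea (purely illustrative; the array is just zeros, and the tile count and size are chosen to match the generator defined later):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 12 RGB tiles of 128x128 stacked vertically give one 1536x128 strip,\n",
"# which is the input shape the model below expects.\n",
"example_tiles = np.zeros((12, 128, 128, 3))\n",
"strip = example_tiles.reshape(12 * 128, 128, 3)\n",
"print(strip.shape)"
]
},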
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"MAIN_DIR = '../input/prostate-cancer-grade-assessment'\n",
"TRAIN_IMG_DIR = '../input/panda-tiles/train'\n",
"TRAIN_MASKS_DIR = '../input/panda-tiles/masks'\n",
"train_csv = pd.read_csv(os.path.join(MAIN_DIR, 'train.csv'))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Some of the images could not be converted to tiles because the masks were too small or the image was too noisy. We need to take these images out of our DataFrame so that we do not run into a `FileNotFoundError`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use a set for fast membership checks when filtering below.\n",
"valid_images = set(tf.io.gfile.glob(TRAIN_IMG_DIR + '/*_0.png'))\n",
"img_ids = train_csv['image_id']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Keep only the rows whose first tile actually exists on disk.\n",
"has_tiles = img_ids.apply(lambda img_id: TRAIN_IMG_DIR + '/' + img_id + '_0.png' in valid_images)\n",
"train_csv = train_csv[has_tiles]\n",
"\n",
"radboud_csv = train_csv[train_csv['data_provider'] == 'radboud']\n",
"karolinska_csv = train_csv[train_csv['data_provider'] != 'radboud']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We want both our training dataset and our validation dataset to contain images from both the Karolinska Institute and Radboud University Medical Center data providers. The following cell splits each DataFrame into an 80:20 training:validation split."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"r_train, r_test = train_test_split(\n",
"    radboud_csv,\n",
"    test_size=0.2, random_state=SEED\n",
")\n",
"\n",
"k_train, k_test = train_test_split(\n",
"    karolinska_csv,\n",
"    test_size=0.2, random_state=SEED\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Concatenate the DataFrames from the two providers, and we have our training dataset and our validation dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_df = pd.concat([r_train, k_train])\n",
"valid_df = pd.concat([r_test, k_test])\n",
"\n",
"print(train_df.shape)\n",
"print(valid_df.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Generally, it is better practice to define named constants than to hard-code numbers. This way, changing parameters is more efficient and consistent. Specify some constants below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"IMG_DIM = (1536, 128)   # height and width of the concatenated tile strip\n",
"CLASSES_NUM = 6         # ISUP grades 0-5\n",
"BATCH_SIZE = 32\n",
"EPOCHS = 100\n",
"N = 12                  # number of tiles per image\n",
"\n",
"LEARNING_RATE = 1e-4\n",
"FOLDED_NUM_TRAIN_IMAGES = train_df.shape[0]\n",
"FOLDED_NUM_VALID_IMAGES = valid_df.shape[0]\n",
"STEPS_PER_EPOCH = FOLDED_NUM_TRAIN_IMAGES // BATCH_SIZE\n",
"VALIDATION_STEPS = FOLDED_NUM_VALID_IMAGES // BATCH_SIZE"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`tf.keras.utils.Sequence` is a base class for feeding a dataset to `fit`. Since our dataset is stored partly as images and partly as a CSV, we will write a DataGenerator subclass of Sequence. The DataGenerator concatenates all the tiles from each original image into a new image containing just the masked areas. It also reads the label from the ISUP grade column and converts it to a one-hot encoding. One-hot encoding is necessary because the ISUP grade is a categorical datatype, not a continuous one."
]
},
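{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration of the encoding step (the grades below are made-up examples, not competition data), `tf.one_hot` turns integer ISUP grades into one-hot vectors of length `CLASSES_NUM`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical ISUP grades for three slides (integer values 0-5).\n",
"example_grades = [0, 2, 5]\n",
"tf.one_hot(example_grades, CLASSES_NUM)  # shape (3, 6): one row per label"
]
},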
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class DataGenerator(tf.keras.utils.Sequence):\n",
"\n",
"    def __init__(self,\n",
"                 image_shape,\n",
"                 batch_size,\n",
"                 df,\n",
"                 img_dir,\n",
"                 mask_dir,\n",
"                 is_training=True\n",
"                 ):\n",
"\n",
"        self.image_shape = image_shape\n",
"        self.batch_size = batch_size\n",
"        self.df = df\n",
"        self.img_dir = img_dir\n",
"        self.mask_dir = mask_dir\n",
"        self.is_training = is_training\n",
"        # Use an ndarray (not a range) so the indices can be shuffled in place.\n",
"        self.indices = np.arange(df.shape[0])\n",
"\n",
"    def __len__(self):\n",
"        return self.df.shape[0] // self.batch_size\n",
"\n",
"    def on_epoch_end(self):\n",
"        # Keras calls on_epoch_end (there is no on_epoch_start hook),\n",
"        # so the shuffle must live here to take effect between epochs.\n",
"        if self.is_training:\n",
"            np.random.shuffle(self.indices)\n",
"\n",
"    def __getitem__(self, index):\n",
"        batch_indices = self.indices[index * self.batch_size : (index + 1) * self.batch_size]\n",
"        image_ids = self.df['image_id'].iloc[batch_indices].values\n",
"        batch_images = [self.__getimages__(image_id) for image_id in image_ids]\n",
"        batch_labels = [self.df[self.df['image_id'] == image_id]['isup_grade'].values[0] for image_id in image_ids]\n",
"        batch_labels = tf.one_hot(batch_labels, CLASSES_NUM)\n",
"\n",
"        # The reshape already yields (batch, height, width, 3); a squeeze here\n",
"        # would wrongly drop the batch dimension for a batch of size 1.\n",
"        return np.stack(batch_images).reshape(-1, *self.image_shape, 3), np.stack(batch_labels)\n",
"\n",
"    def __getimages__(self, image_id):\n",
"        fnames = [image_id + '_' + str(i) + '.png' for i in range(N)]\n",
"        images = []\n",
"        for fn in fnames:\n",
"            # Load the tile, force RGB, then reverse the channel order (RGB -> BGR).\n",
"            img = np.array(PIL.Image.open(os.path.join(self.img_dir, fn)).convert('RGB'))[:, :, ::-1]\n",
"            images.append(img)\n",
"        # Stack the N tiles vertically into one strip and rescale to [0, 1].\n",
"        result = np.stack(images).reshape(1, *self.image_shape, 3) / 255.0\n",
"        return result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use the DataGenerator class to create a generator for our training dataset and one for our validation dataset. At each iteration, a generator returns a batch of images and their one-hot labels."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"train_generator = DataGenerator(image_shape=IMG_DIM,\n",
"                                batch_size=BATCH_SIZE,\n",
"                                df=train_df,\n",
"                                img_dir=TRAIN_IMG_DIR,\n",
"                                mask_dir=TRAIN_MASKS_DIR)\n",
"\n",
"valid_generator = DataGenerator(image_shape=IMG_DIM,\n",
"                                batch_size=BATCH_SIZE,\n",
"                                df=valid_df,\n",
"                                img_dir=TRAIN_IMG_DIR,\n",
"                                mask_dir=TRAIN_MASKS_DIR,\n",
"                                is_training=False)  # keep validation order fixed\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Visualize our input data\n",
"\n",
"Run the following cell to define the method that visualizes our input data. It displays the new images and their corresponding ISUP grades."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def show_tiles(image_batch, label_batch):\n",
"    plt.figure(figsize=(20, 20))\n",
"    for n in range(10):\n",
"        ax = plt.subplot(1, 10, n + 1)\n",
"        plt.imshow(image_batch[n])\n",
"        decoded = np.argmax(label_batch[n])  # recover the integer grade from the one-hot label\n",
"        plt.title(decoded)\n",
"        plt.axis(\"off\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"image_batch, label_batch = next(iter(train_generator))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each of the following images was originally a single large scan that has been converted to 12 tiles to reduce white space. We see that only the sections that led to the ISUP grade have been preserved."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"show_tiles(image_batch, label_batch)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build our model + Data augmentation\n",
"\n",
"We will be utilizing a pre-trained model (VGG16 in the code below) to classify our data. The PANDA competition scores submissions using the quadratic weighted kappa. The TensorFlow Addons API contains Cohen Kappa loss and metric functions, but since we want to use the newest version of TensorFlow through tf-nightly, we will refrain from using TFA, as it has not yet been ported to the tf-nightly version. However, feel free to create your own Cohen Kappa metric and loss class using the TensorFlow API."
]
},
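{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of how the competition metric could be checked offline (a post-hoc score, not a differentiable loss; the grade arrays below are made-up examples), scikit-learn already ships the quadratic weighted kappa:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import cohen_kappa_score\n",
"\n",
"# Hypothetical integer ISUP grades: ground truth vs. model predictions.\n",
"y_true = [0, 1, 2, 3, 4, 5, 2]\n",
"y_pred = [0, 1, 2, 2, 4, 5, 3]\n",
"print(cohen_kappa_score(y_true, y_pred, weights='quadratic'))"
]
},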
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Data augmentation is helpful when dealing with image data, as it prevents overfitting. It introduces artificial but realistic variance into our images so that our model can learn from more features. Keras provides preprocessing layers (`tf.keras.layers.experimental.preprocessing`) that let the augmentation run as part of the model itself."
]
},
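{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see what these layers do (a quick illustrative check that assumes `image_batch` from the generator above is still in memory), apply one directly to a batch. Preprocessing layers only augment when called with `training=True`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"demo_flip = tf.keras.layers.experimental.preprocessing.RandomFlip(\"horizontal\", seed=SEED)\n",
"flipped = demo_flip(image_batch, training=True)  # augmentation only runs in training mode\n",
"print(image_batch.shape, flipped.shape)  # shapes are unchanged, contents may be flipped\n",
"plt.imshow(flipped[0]); plt.axis(\"off\")"
]
},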
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the base model has already been trained with ImageNet weights, we do not want those weights to change, so the base model must not be trainable. However, the number of classes our model predicts differs from the original model's, so we do not include the top layers; instead we add our own Dense layer with one node per output class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def make_model():\n",
"    data_augmentation = tf.keras.Sequential([\n",
"        tf.keras.layers.experimental.preprocessing.RandomContrast(0.15, seed=SEED),\n",
"        tf.keras.layers.experimental.preprocessing.RandomFlip(\"horizontal\", seed=SEED),\n",
"        tf.keras.layers.experimental.preprocessing.RandomFlip(\"vertical\", seed=SEED),\n",
"        tf.keras.layers.experimental.preprocessing.RandomTranslation(0.1, 0.1, seed=SEED)\n",
"    ])\n",
"\n",
"    base_model = tf.keras.applications.VGG16(input_shape=(*IMG_DIM, 3),\n",
"                                             include_top=False,\n",
"                                             weights='imagenet')\n",
"\n",
"    # Freeze the base model so the pre-trained ImageNet weights stay fixed,\n",
"    # as described above.\n",
"    base_model.trainable = False\n",
"\n",
"    model = tf.keras.Sequential([\n",
"        data_augmentation,\n",
"        base_model,\n",
"        tf.keras.layers.GlobalAveragePooling2D(),\n",
"        tf.keras.layers.Dense(16, activation='relu'),\n",
"        tf.keras.layers.BatchNormalization(),\n",
"        tf.keras.layers.Dense(CLASSES_NUM, activation='softmax'),\n",
"    ])\n",
"\n",
"    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=LEARNING_RATE),\n",
"                  loss='categorical_crossentropy',\n",
"                  metrics=[tf.keras.metrics.AUC(name='auc')])\n",
"\n",
"    return model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's build our model!"
]
},
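{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Instantiate the model defined above; this `model` is what `fit` is called on below.\n",
"model = make_model()"
]
},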
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the model\n",
"\n",
"And now let's train it! The learning rate is a very important hyperparameter, and it can be difficult to choose the \"right\" one. A learning rate that is too high will prevent the model from converging, but one that is too low will make training far too slow. We will use multiple `tf.keras` callbacks to keep the learning rate on an ideal schedule and to prevent the model from overfitting. We can also save our model so that we do not have to retrain it next time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def exponential_decay(lr0, s):\n",
"    # Returns a schedule that decays the learning rate by 10x every s epochs.\n",
"    def exponential_decay_fn(epoch):\n",
"        return lr0 * 0.1 ** (epoch / s)\n",
"    return exponential_decay_fn\n",
"\n",
"exponential_decay_fn = exponential_decay(0.01, 20)\n",
"\n",
"lr_scheduler = tf.keras.callbacks.LearningRateScheduler(exponential_decay_fn)\n",
"\n",
"checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(\"panda_model.h5\",\n",
"                                                   save_best_only=True)\n",
"\n",
"early_stopping_cb = tf.keras.callbacks.EarlyStopping(patience=10,\n",
"                                                     restore_best_weights=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"history = model.fit(\n",
"    train_generator, epochs=EPOCHS,\n",
"    steps_per_epoch=STEPS_PER_EPOCH,\n",
"    validation_data=valid_generator,\n",
"    validation_steps=VALIDATION_STEPS,\n",
"    callbacks=[checkpoint_cb, early_stopping_cb, lr_scheduler]\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Predict results\n",
"\n",
"For this competition, the test dataset is not available to us, but the prediction step itself is sketched below. I wish you all the best of luck, and hopefully this notebook has served as a helpful tutorial to get you started."
]
}
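,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the prediction step (run here on the validation generator, since the test set is hidden; `model.predict` plus `np.argmax` is standard Keras/NumPy usage, nothing competition-specific):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Softmax probabilities per class, then the predicted ISUP grade per image.\n",
"probs = model.predict(valid_generator)\n",
"pred_grades = np.argmax(probs, axis=1)\n",
"print(pred_grades[:10])"
]
}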
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.13 ('base')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "5819c1eaf6d552792a1bbc5e8998e6c2149ab26a1973a0d78107c0d9954e5ba0"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}

(Binary files: 99 small PNG images added in this commit, 431 B–4.3 KiB each; filenames are not shown in this view.)