Commit 6c8af6ff authored by Sun Jin Kim

new docs

parent 378cd72c
@@ -367,8 +367,6 @@ class AaLearner:
            accuracy (float): best accuracy reached in any
        """
        # we create an instance of the child network that we're going
        # to train. The method of creation depends on the type of
        # input we got for child_network_architecture
...
:mod:`autoaug.autoaugment_learners`.AaLearner
==============================================
.. currentmodule:: autoaug.autoaugment_learners
.. autoclass:: AaLearner
:members:
:mod:`autoaug.autoaugment_learners`.EvoLearner
==============================================
.. currentmodule:: autoaug.autoaugment_learners
.. autoclass:: EvoLearner
:members:
:mod:`autoaug.autoaugment_learners`.GruLearner
==============================================
.. currentmodule:: autoaug.autoaugment_learners
.. autoclass:: GruLearner
:members:
:mod:`autoaug.autoaugment_learners`.RsLearner
==============================================
.. currentmodule:: autoaug.autoaugment_learners
.. autoclass:: RsLearner
:members:
:mod:`autoaug.autoaugment_learners`.UcbLearner
==============================================
.. currentmodule:: autoaug.autoaugment_learners
.. autoclass:: UcbLearner
:members:
AutoAugment learners
--------------------
.. toctree::
:maxdepth: 3
:caption: autoaugment_learners
aa_learners/autoaug.autoaugment_learners.AaLearner
aa_learners/autoaug.autoaugment_learners.EvoLearner
aa_learners/autoaug.autoaugment_learners.GruLearner
aa_learners/autoaug.autoaugment_learners.RsLearner
aa_learners/autoaug.autoaugment_learners.UcbLearner
@@ -5,32 +5,14 @@
 .. toctree::
    :maxdepth: 4
-   :caption: Usage (developers might find this useful):
+   :caption: How-to Guides:
-   usage/tutorial_for_team
-   usage/autoaugment_helperclass
+   howto/howto_main
 .. toctree::
    :maxdepth: 4
-   :caption: API Reference:
+   :caption: Explanations:
-   autoaug/autoaugment_learners
+   explanations/autoaugment_learners
-   autoaug/aa_learners/autoaug.autoaugment_learners.AaLearner
-   autoaug/aa_learners/autoaug.autoaugment_learners.EvoLearner
-   autoaug/aa_learners/autoaug.autoaugment_learners.GruLearner
-   autoaug/aa_learners/autoaug.autoaugment_learners.RsLearner
-   autoaug/aa_learners/autoaug.autoaugment_learners.UcbLearner
..
I've commented this out for now
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
AutoAugment object
------------------
######################################################################################################
How to use an ``AutoAugment`` object to apply AutoAugment policies to ``Dataset`` objects
######################################################################################################
This is a page dedicated to demonstrating the functionality of :class:`AutoAugment`,
the helper class we use to apply AutoAugment policies to datasets.
This is a tutorial (in the sense described in https://documentation.divio.com/structure/).
For an example of how this material is used in our library, see the source code of
:meth:`AaLearner._test_autoaugment_policy <autoaug.autoaugment_learners.AaLearner>`.
Let's say we have a policy within the search space specified by the original
AutoAugment paper:
.. code-block::
my_policy = [
(("Invert", 0.8, None), ("Contrast", 0.2, 6)),
(("Rotate", 0.7, 2), ("Invert", 0.8, None)),
(("Sharpness", 0.8, 1), ("Sharpness", 0.9, 3)),
(("ShearY", 0.5, 8), ("Invert", 0.7, None)),
(("AutoContrast", 0.5, None), ("Equalize", 0.9, None))
]
And that we also have a dataset that we want to apply this policy to:
.. code-block::
train_dataset = datasets.MNIST(root='./datasets/mnist/train', train=True)
test_dataset = datasets.MNIST(root='./datasets/mnist/test', train=False,
transform=torchvision.transforms.ToTensor())
The ``train_dataset`` object will have an attribute ``.transform``, whose
default value is ``None``.
The ``.transform`` attribute takes a function which takes an image as its input
and returns a transformed image.
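For instance, a minimal sketch of such a function, here called ``my_function`` as in
the examples below (the rotation it applies is purely illustrative, not part of the library):
.. code-block::

    def my_function(image):
        # takes a PIL image and returns a transformed PIL image
        return image.rotate(90)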
We need a function which applies ``my_policy``, and we use
an ``AutoAugment`` object for this job.
.. code-block::
:caption: Creating an ``AutoAugment`` object and imbuing it with ``my_policy``.
aa_transform = AutoAugment()
aa_transform.subpolicies = my_policy
train_transform = transforms.Compose([
aa_transform,
transforms.ToTensor()
])
We can use ``train_transform`` as an image function:
.. code-block::
:caption: This function call will return an augmented image
augmented_image = train_transform(original_image)
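Here, ``original_image`` can be, for example, a PIL image taken straight from the
untransformed dataset:
.. code-block::

    original_image, label = train_dataset[0]  # a PIL image, since .transform is None
    augmented_image = train_transform(original_image)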
We usually apply an image function to a ``Dataset`` like this:
.. code-block::
train_dataset = datasets.MNIST(root='./datasets/mnist/train', train=True, transform=my_function)
However, in our library we often have to apply an image function *after* the ``Dataset``
object has already been created (for example, when a ``Dataset`` object is created once
and then trained on multiple times using different policies).
In this case, we alter the ``.transform`` attribute:
.. code-block::
train_dataset.transform = train_transform
Now if we create a ``DataLoader`` object from ``train_dataset``, it will automatically
apply ``my_policy``.
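A minimal sketch of this last step (the batch size is arbitrary):
.. code-block::

    from torch.utils.data import DataLoader

    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
    images, labels = next(iter(train_loader))  # these images have my_policy applied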
AaLearner object and its children
------------------------------------------------------------------------------------------------
######################################################################################################
How to use the ``AaLearner`` class to find an optimal policy for a dataset-child_network pair
######################################################################################################
This is a page dedicated to demonstrating the functionality of :class:`AaLearner`.
This is a how-to guide (in the sense described in https://documentation.divio.com/structure/).
This section can also be read as a ``.py`` file in ``./tutorials/how_use_aalearner.py``.
.. code-block::
import autoaug.autoaugment_learners as aal
import autoaug.child_networks as cn
import torchvision.datasets as datasets
import torchvision
Defining the problem setting:
.. code-block::
train_dataset = datasets.MNIST(root='./autoaug/datasets/mnist/train',
train=True, download=True, transform=None)
test_dataset = datasets.MNIST(root='./autoaug/datasets/mnist/test',
train=False, download=True, transform=torchvision.transforms.ToTensor())
child_network_architecture = cn.lenet
.. warning::
In earlier versions, we had to write ``child_network_architecture=cn.LeNet``
and not ``child_network_architecture=cn.LeNet()``, but now both work:
both types of objects can be passed into ``AaLearner.learn()``.
More precisely, the ``child_network_architecture`` parameter has to be either
an ``nn.Module``, a ``function`` which returns an ``nn.Module``, or a ``type``
which inherits from ``nn.Module``, as sketched below.
A downside (or maybe the upside??) of choosing one of the latter two is that
the same randomly initialized weights are used for every policy.
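A sketch of the three accepted forms, using the ``cn.LeNet`` class and the
``cn.lenet`` function from this guide:
.. code-block::

    # any of these three is a valid child_network_architecture:
    child_network_architecture = cn.LeNet()  # an nn.Module instance
    child_network_architecture = cn.LeNet    # a type which inherits from nn.Module
    child_network_architecture = cn.lenet    # a function which returns an nn.Module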
Using the random search learner to evaluate randomly generated policies (you
can use any other learner in place of the random search learner as well):
.. code-block::
# aa_agent = aal.GruLearner()
# aa_agent = aal.EvoLearner()
# aa_agent = aal.UcbLearner()
# aa_agent = aal.ac_learner()
aa_agent = aal.RsLearner(
sp_num=7,
toy_size=0.01,
batch_size=4,
learning_rate=0.05,
max_epochs=float('inf'),
early_stop_num=35,
)
aa_agent.learn(train_dataset,
test_dataset,
child_network_architecture=child_network_architecture,
iterations=15000)
You can set further hyperparameters when defining an ``AaLearner``.
Also, depending on which learner you are using, there may be learner-specific hyperparameters.
For example, in the GRU learner you can tune the exploration parameter ``alpha``.
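For instance, a sketch (assuming ``alpha`` is accepted as a constructor keyword;
the value ``0.2`` is arbitrary, chosen only for illustration):
.. code-block::

    # sketch: tuning the GRU learner's exploration parameter
    aa_agent = aal.GruLearner(alpha=0.2)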
Viewing the results:
``.history`` is a list containing every policy tested and the accuracy
obtained when the child network was trained using it.
.. code-block::
print(aa_agent.history)
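For example, to pick out the best policy found so far (a sketch, assuming each
entry of ``.history`` is a ``(policy, accuracy)`` tuple):
.. code-block::

    # sketch: assuming each history entry is a (policy, accuracy) tuple
    best_policy, best_accuracy = max(aa_agent.history, key=lambda entry: entry[1])
    print(best_policy, best_accuracy)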