TERMO DE COOPERAÇÃO TÉCNICA E FINANCEIRA QUE ENTRE SI CELEBRAM O INSTITUTO SERRAPILHEIRA, A FUNDAÇÃO UNIVERSIDADE FEDERAL DE SÃO CARLOS, A FUNDAÇÃO ARTHUR BERNARDES, E RICARDO CERRI, NA FORMA ABAIXO:
INSTITUTO SERRAPILHEIRA, associação sem fins lucrativos, com sede na Xxx Xxxxxx xx Xxxxxxxx, 000, 0x xxxxx, xxxx 0, Xxxxxxx, na cidade do Rio de Janeiro, Estado do RJ, inscrito no CNPJ sob o nº 23.827.151/0001-13, neste ato regularmente representado por seu Diretor-Presidente Xxxx Xxxxxxx Xxxxx Xxxxxxxxx, francês, geneticista, RNE V889563-3, inscrito no CPF sob o nº 000.000.000-00 e por seu procurador Xxxxxx Xxxx Xxxxx xx Xxxxxx et d' Audenhove, brasileiro, casado, engenheiro, portador do RG nº 6.793.393-7 SSP/RJ e CPF/MF sob o nº 000.000.000-00, ambos residentes e domiciliados no Estado do Rio de Janeiro, na Cidade do Rio de Janeiro, doravante denominado isoladamente INSTITUTO;
FUNDAÇÃO UNIVERSIDADE FEDERAL DE SÃO CARLOS, pessoa jurídica de
direito público, com sede na Xxx. Xxxxxxxxxx Xxxx xx 000 - XX-000 – Xxxxxxxxxx - Xxx Xxxxxx - Xxx Xxxxx – CEP: 13565-905, inscrita no CNPJ sob o nº 45.358.058/0001-40, neste ato representada por Xxxxx Xxxxxxxxx Xxxxxxx Xxxxxxxx, RG 7.607.024-4 e CPF sob nº 000.000.000-00, doravante denominada isoladamente INSTITUIÇÃO DE PESQUISA ou, simplesmente INSTITUIÇÃO;
FUNDAÇÃO XXXXXX XXXXXXXXX, fundação de direito privado, com personalidade jurídica própria, sem fins lucrativos, com sede na cidade de Viçosa, no Campus Universitário, inscrita no CNPJ sob o nº 20.320.503/0001-51, neste ato representada por seu Diretor-Presidente Xxxxxxx Xxxx, professor universitário, RG M4623812 SSP/MG e CPF sob nº 644.357.686-15, doravante denominada isoladamente FUNDAÇÃO ou, simplesmente FUNARBE, e
XXXXXXX XXXXX, brasileiro, professor universitário, RG 32.240.599-3 e CPF sob nº 000.000.000-00, doravante designado isoladamente COORDENADOR (A) e, quando em conjunto com o INSTITUTO, a INSTITUIÇÃO DE PESQUISA e a FUNDAÇÃO, denominados Partícipes (PARTÍCIPES).
Considerando que, em função do objetivo do INSTITUTO de fomentar o desenvolvimento científico e tecnológico no país, o (a) COORDENADOR (A) foi escolhido (a), em 29 de maio de 2020, como um dos 23 (vinte e três) cientistas selecionados para receber apoio no âmbito da Chamada 2019 de Pesquisa Científica do INSTITUTO;
Considerando que, no âmbito do referido processo de seleção, foi selecionado o projeto “Evolução Automática de Redes Neurais Profundas”, apresentado pelo (a) pesquisador (a) /cientista Xxxxxxx Xxxxx (“PROJETO”);
Considerando que o PROJETO está sendo desenvolvido junto à Fundação Universidade Federal de São Carlos, INSTITUIÇÃO à qual o (a) pesquisador (a) /cientista está vinculado;
Considerando que a FUNARBE é uma fundação de apoio que auxilia a instituição de pesquisa e outras entidades na execução dos seus projetos de pesquisa;
Considerando que a Lei nº 10.973 de 2004 (“Lei de Inovação”) dispõe sobre incentivos à inovação e à pesquisa científica e tecnológica no ambiente produtivo, e que a Lei nº
13.243 de 2016 (“Marco Legal de Ciência e Tecnologia”) promoveu considerável modificação e modernização legislativa com a finalidade de estimular o desenvolvimento científico, a pesquisa, a inovação;
RESOLVEM, os PARTÍCIPES, firmar o presente TERMO DE COOPERAÇÃO
(“TERMO DE COOPERAÇÃO”), com fulcro na Lei nº 10.973 de 2004 (“Lei de Inovação”) e c/c a Lei nº
13.243 de 2016 (“Marco Legal de Ciência e Tecnologia”), mediante as seguintes cláusulas e condições:
CLÁUSULA PRIMEIRA - DO OBJETO
1.1 Constitui objeto do TERMO DE COOPERAÇÃO a união de esforços dos
PARTÍCIPES para o desenvolvimento do PROJETO, sob supervisão do (a) COORDENADOR (A).
1.2 O PROJETO objeto do TERMO DE COOPERAÇÃO deverá ser executado em conformidade com as descrições constantes dos documentos denominados “Plano de Trabalho”, os quais passam a integrar o presente instrumento, independentemente de transcrição, sob a forma de Anexo I, ficando desde já estabelecido que eventuais alterações das atividades descritas no Anexo I, em função da evolução natural das pesquisas conduzidas no âmbito do PROJETO, poderão ser incorporadas ao objeto do TERMO DE COOPERAÇÃO mediante simples comunicação no relatório final, ou, no caso de alterações relevantes, imediatamente por e-mail, feita pelo (a) COORDENADOR
(A) ao INSTITUTO, e que passarão a integrar o Anexo I para todos os efeitos deste Termo.
1.3 A INSTITUIÇÃO, desde já, nomeia como coordenador (a) geral do
TERMO DE COOPERAÇÃO, o (a) Professor (a) /Pesquisador (a) Xxxxxxx Xxxxx.
CLÁUSULA SEGUNDA - DA VIGÊNCIA
2.1 A vigência prevista do TERMO DE COOPERAÇÃO é até 14 de julho de 2021, podendo ser prorrogado, de ofício, por igual período ou frações, mediante pedido acompanhado de justificativa circunstanciada e aceitação mútua de todos os PARTÍCIPES.
2.2 Não obstante o disposto na cláusula 2.1, concordam os PARTÍCIPES que poderá ser abrangido pelo TERMO DE COOPERAÇÃO o reembolso de valores incorridos com o PROJETO a partir de 12 de junho de 2020.
CLÁUSULA TERCEIRA - DOS RECURSOS
3.1 Os recursos a serem alocados pelo INSTITUTO para financiamento do TERMO DE COOPERAÇÃO serão de R$ 66.700,00 (sessenta e seis mil e setecentos Reais), podendo ser alterados por termo aditivo, previamente aprovado de comum acordo por todos os PARTÍCIPES.
3.2 Os recursos a serem transferidos pelo INSTITUTO à FUNARBE, mencionados nesta Cláusula, se destinam à execução do PROJETO, devendo ser exclusivamente aplicados nas atividades descritas no Anexo I.
3.3 Os PARTÍCIPES reconhecem que o INSTITUTO observará sempre os limites do seu orçamento anual aprovado, na execução de suas obrigações relativas ao TERMO DE COOPERAÇÃO, notadamente em relação à disponibilidade dos recursos.
3.3.1 O INSTITUTO obedecerá a seu cronograma de orçamento para creditar os repasses dos recursos dos projetos, sendo vedados repasses únicos de valores integrais orçados dentro do primeiro ano calendário do Projeto e devendo o valor total dos repasses ser efetivado ao longo da vigência do TERMO DE COOPERAÇÃO.
3.4 Devem ainda ser deduzidos do valor dos recursos mencionado na cláusula 3.1 (i) a remuneração da FUNARBE, equivalente a 8% (oito por cento) do referido valor, e (ii) a remuneração devida à INSTITUIÇÃO, fixada entre 2% e 5% do referido valor.
3.5 Os recursos serão repassados através de depósito bancário em conta corrente de titularidade da FUNARBE, aberta especificamente para este fim e identificada por correspondência escrita encaminhada ao INSTITUTO.
3.6 Na hipótese de os recursos disponibilizados pelo INSTITUTO na forma da cláusula 3.1, não terem sido total ou parcialmente utilizados no PROJETO até a data do vencimento do TERMO DE COOPERAÇÃO, poderá o INSTITUTO, a seu exclusivo critério, solicitar a respectiva devolução, parcial ou totalmente.
3.7 Os PARTÍCIPES expressamente acordam que o INSTITUTO somente terá obrigação de contribuir, para o objeto do TERMO DE COOPERAÇÃO, com o valor referido na cláusula 3.1, de forma que quaisquer recursos adicionais necessários à execução do PROJETO deverão ser providenciados exclusivamente pelas demais partes, às suas expensas.
CLÁUSULA QUARTA – DAS OBRIGAÇÕES DOS PARTÍCIPES
Consistem em obrigações das Partes:
I – DO INSTITUTO:
a) Transferir os recursos financeiros, conforme estabelecido no TERMO DE
COOPERAÇÃO;
b) Acompanhar a execução das ações previstas no TERMO DE COOPERAÇÃO através de: (i) relatórios técnico-científicos e (ii) relatórios de execução financeira, a serem apresentados pelo (a) COORDENADOR (A) e pela FUNDAÇÃO, observado o disposto no item 5.1 abaixo;
c) Custear despesas para realização de eventuais reuniões de acompanhamento do PROJETO que venha a solicitar.
II – DA INSTITUIÇÃO
a) Nomear o (a) COORDENADOR (A) como responsável por coordenar e acompanhar o PROJETO, conforme descrito no plano de trabalho constante do Anexo I;
b) Abster-se de determinar ou permitir que o (a) COORDENADOR (A) seja afastado da coordenação do PROJETO, e
c) Prover toda a infraestrutura e apoio técnicos necessários à execução dos trabalhos objeto do plano de trabalho constante do Anexo I, mormente espaço físico, equipamentos, máquinas, implementos, insumos e demais recursos técnicos e administrativos.
III – DA FUNDAÇÃO (INSTITUIÇÃO gestora)
a) Exercer a gestão dos recursos previstos na cláusula terceira, aplicando-os exclusivamente para o cumprimento das finalidades do TERMO DE COOPERAÇÃO;
b) Manter os recursos repassados em conta bancária específica, aberta exclusivamente para execução das ações do TERMO DE COOPERAÇÃO, obrigando-se a aplicar os recursos não utilizados em conformidade com os §§ 4º e 5º do Art. 116 da Lei Federal nº 8.666/93, cujo rendimento da aplicação financeira será revertido para a execução do PROJETO;
c) Permitir aos coordenadores do PROJETO acesso, a qualquer momento, às informações da conta bancária, bem como acesso aos extratos e movimentações financeiras;
d) Observar, na gestão dos recursos recebidos, os princípios da legalidade, impessoalidade, moralidade, publicidade, economicidade e eficiência, além das regras de desembolso previstas no TERMO DE COOPERAÇÃO;
e) Permitir o acompanhamento de suas atividades em relação ao objeto do
TERMO DE COOPERAÇÃO por parte do INSTITUTO ou da INSTITUIÇÃO;
f) Manter arquivo com documentação comprobatória das despesas realizadas em virtude do TERMO DE COOPERAÇÃO, disponibilizando-as para consulta dos integrantes, a qualquer tempo, inclusive para análise técnica financeira;
g) Registrar, em sua contabilidade específica do PROJETO, os atos e fatos administrativos referentes à gestão dos recursos alocados por força do TERMO DE COOPERAÇÃO;
IV – DO (A) COORDENADOR (A)
a) Executar, coordenar e acompanhar as ações previstas no plano de trabalho constante do Anexo I;
b) Responsabilizar-se pela utilização dos recursos financeiros disponibilizados pelo INSTITUTO exclusivamente no âmbito do PROJETO, obrigando-se a devolver os valores que venham a ser aplicados em despesas estranhas ao objeto do TERMO DE COOPERAÇÃO;
c) Estar disponível para participar de reuniões técnico-científicas para apresentação dos resultados parciais ou finais do PROJETO;
d) Participar do processo de avaliação de impacto do PROJETO mediante solicitação do INSTITUTO;
e) Participar das iniciativas da “Comunidade Serrapilheira” e de “Divulgação Científica” promovidas pelo INSTITUTO;
f) Ao firmar o TERMO DE COOPERAÇÃO, o (a) COORDENADOR (A) indica estar de acordo com a relevância da política de incentivo à diversidade e de boas práticas em ciência aberta e reprodutível do INSTITUTO (Guias disponíveis no link: xxxxx://xxxxxxxxxxxxx.xxx/xxxxxx-xxxxxxx/), e se propõe, neste ato, a incorporar no desenvolvimento regular de suas atividades e na integral consecução do PROJETO os princípios do “Guia de Boas Práticas em Diversidade na Ciência” e do “Guia de Boas Práticas em Ciência Aberta e Reprodutível” do INSTITUTO.
CLÁUSULA QUINTA – DOS RELATÓRIOS DE ACOMPANHAMENTO
5.1 Visando permitir ao INSTITUTO acompanhar a integralidade da execução das ações previstas no TERMO DE COOPERAÇÃO, caberá ao (a) COORDENADOR (A) emitir um relatório técnico-científico após 6 meses da assinatura do TERMO DE COOPERAÇÃO e outro ao final da vigência do TERMO DE COOPERAÇÃO, bem como caberá à FUNARBE emitir um relatório financeiro da utilização dos recursos até o dia 31 de dezembro do ano de assinatura do TERMO DE COOPERAÇÃO e outro ao final da vigência do TERMO DE COOPERAÇÃO.
CLÁUSULA SEXTA – DA PUBLICAÇÃO CIENTÍFICA
6.1 O INSTITUTO estimula que os dados brutos e resultados obtidos no âmbito do PROJETO ao longo do período de vigência do TERMO DE COOPERAÇÃO, incluindo, mas não se limitando a artigos revisados por pares, monografias e códigos de programação, sejam publicados, pela INSTITUIÇÃO e pelo (a) COORDENADOR (A), em repositórios de acesso público. O
(A) COORDENADOR (A) poderá, ainda, publicar seus trabalhos a respeito do PROJETO em periódicos de acesso gratuito.
6.2 Em quaisquer publicações científicas de divulgação dos resultados do PROJETO, por qualquer meio, feitos pelo (a) COORDENADOR (A), pela INSTITUIÇÃO ou pela FUNDAÇÃO, deverá ser inserida a frase “Este trabalho recebeu apoio do Instituto Serrapilheira (número do processo Serra – 1912-31676)” ou “This work was supported by the Serrapilheira Institute (grant number Serra – 1912-31676)”.
6.3 Eventuais custos incorridos com a publicação dos resultados gerados a partir do projeto, objeto desta cláusula sexta, deverão ser arcados pela INSTITUIÇÃO ou pelo (a) COORDENADOR (A), podendo, se necessário, ser deduzidos do valor total referido na cláusula 3.1.
CLÁUSULA SÉTIMA – DA DIVULGAÇÃO
7.1 Poderão, ainda, os PARTÍCIPES divulgar o apoio do INSTITUTO à execução do PROJETO em palestras, seminários e cursos ministrados e/ou organizados por eles, desde que relativos ao PROJETO, ou quando da divulgação de qualquer produto resultante do PROJETO, que venha a se concretizar através de folders, banners, cartazes, quadros, folhetos, entre outros, o que deverá ser aprovado previamente pelo INSTITUTO.
7.1.1 Qualquer divulgação, de qualquer natureza, que envolva o nome do
INSTITUTO SERRAPILHEIRA deverá ser prévia e formalmente aprovada pelo INSTITUTO.
7.1.2 Se houver divulgação de qualquer natureza que envolva a
INSTITUIÇÃO, a mesma deverá ser consultada.
7.2 É vedada qualquer forma de promoção pessoal, observado o art. 37,
§1º, da Constituição Federal de 1988.
CLÁUSULA OITAVA – DA PROPRIEDADE INTELECTUAL
8.1 Qualquer invento, aperfeiçoamento ou inovação, obtenção de processos ou produtos, privilegiáveis ou não, gerados em decorrência do TERMO DE COOPERAÇÃO serão de titularidade da INSTITUIÇÃO e/ou do (a) COORDENADOR (A), conforme o caso, não cabendo quaisquer destes direitos ao INSTITUTO ou à FUNDAÇÃO.
8.2 As despesas das proteções de propriedade intelectual, os encargos periódicos de manutenção destas proteções, bem como quaisquer encargos administrativos e judiciais no âmbito nacional e internacional serão absorvidos pela INSTITUIÇÃO, na forma da sua regulamentação aplicável.
8.3 É garantido à INSTITUIÇÃO e/ou ao (a) COORDENADOR (A) o direito de uso dos resultados, para fins de pesquisa, sem que caiba qualquer remuneração ao INSTITUTO ou à FUNDAÇÃO.
8.4 Fica assegurado à INSTITUIÇÃO e/ou ao (à) COORDENADOR (A) o direito de exploração e licenciamento para terceiros interessados, das tecnologias desenvolvidas durante a vigência do TERMO DE COOPERAÇÃO.
CLÁUSULA NONA – DA AUTORIZAÇÃO DE USO DE IMAGEM E VOZ
9.1 O (A) COORDENADOR (A) autoriza, em caráter definitivo e gratuito e para todos os fins em direito admitidos, a utilização de sua imagem e voz, registrada e constante em todo e qualquer material produzido pelo INSTITUTO, incluindo, mas não se limitando a eventos realizados para quaisquer fins de interesse do INSTITUTO.
9.2 O material referido na cláusula 9.1 poderá ser exibido e reproduzido, sem limitação, em quaisquer publicações e divulgações, em território nacional ou no exterior, sob qualquer forma, em qualquer tipo de mídia, incluindo mas não se limitando a redes sociais, site do INSTITUTO ou de terceiros por ele autorizados, folhetos em geral (encartes, mala direta, catálogo, etc.), folders de apresentação, anúncios em revistas, jornais e meios de comunicação em geral, bem como disponibilizado no banco de imagens resultante de evento do INSTITUTO, podendo também o INSTITUTO executar a edição e montagem das fotos e filmagens, conduzindo as reproduções que entender necessárias.
9.3 A autorização tratada nesta cláusula nona é concedida a título exclusivamente gratuito, ficando ainda autorizada, para os mesmos fins, a cessão dos direitos de uso, reprodução e veiculação das imagens e voz captados, para terceiros, não sendo devido qualquer tipo de remuneração ao (a) COORDENADOR (A) em decorrência do uso, reprodução, veiculação, ou cessão autorizados pelo INSTITUTO ou por terceiros cessionários, declarando ainda o (a) COORDENADOR (A) ser esta a expressão de sua vontade, nada tendo a reclamar a título de direitos conexos a sua imagem e voz.
CLÁUSULA DÉCIMA – DA CONFIDENCIALIDADE
10.1 Os PARTÍCIPES, por si, seus representantes, administradores, assessores, empregados e prestadores de serviços obrigam-se a manter o TERMO DE COOPERAÇÃO e seus anexos, bem assim, suas condições, além das informações entre si trocadas para sua celebração, estritamente confidenciais, obrigando-se a não utilizá-las, exceto para o fim de possibilitar a execução do mesmo ou na medida em que (i) venha a ser obrigada por decisão judicial ou por obrigação legal, previamente informada às outras partes contratantes, ou (ii) a informação já seja de conhecimento público. “Informação Confidencial” significa toda e qualquer informação em qualquer forma que seja divulgada, incluindo, mas sem limitação, as informações financeiras referentes ao custo dos serviços que forem disponibilizadas por uma parte à outra; ou que tenham sido identificadas como confidenciais, sejam de propriedade da parte reveladora ou de terceiros, ou que tenham sido obtidas pela parte receptora mediante visita a qualquer instalação, estabelecimento ou escritório da parte reveladora, seja anterior ou posteriormente à celebração do TERMO DE COOPERAÇÃO.
10.2 É vedado aos PARTÍCIPES utilizar, publicar, divulgar ou de outra forma mencionar em qualquer publicidade, promoção de serviços ou a qualquer outro título ou pretexto, a quaisquer terceiros, os termos e as condições do TERMO DE COOPERAÇÃO, sem a prévia autorização por escrito de qualquer outro PARTÍCIPE. Adicionalmente, é terminantemente vedada a utilização de marcas e logomarcas de qualquer dos PARTÍCIPES, sem prévia autorização por escrito dos mesmos.
10.3 Qualquer PARTÍCIPE que venha a dar causa ou de qualquer modo tome conhecimento de qualquer violação do disposto nesta Cláusula Décima deverá, imediatamente, comunicar tal fato a todos os PARTÍCIPES para que estes possam, se desejarem, tomar as medidas cabíveis para a proteção de seus respectivos direitos.
10.4 As previsões de Confidencialidade e Sigilo aqui previstas deverão perdurar por toda a duração do TERMO DE COOPERAÇÃO e por prazo indeterminado após a conclusão do prazo original deste, independentemente de rescisão antecipada, imotivada ou não, do TERMO DE COOPERAÇÃO.
CLÁUSULA DÉCIMA PRIMEIRA – DO COMPARTILHAMENTO DE INFORMAÇÕES/DADOS
11.1 Os PARTÍCIPES concordam que em observância aos termos da legislação nacional de proteção de dados (LGPD - Lei Geral de Proteção de Dados) e na execução das disposições do TERMO DE COOPERAÇÃO, o INSTITUTO poderá compartilhar informações com provedores de serviços, desde que mediante compromisso de confidencialidade, ou com terceiros quando necessário para cumprir exigências legais ou regulatórias.
11.2 O INSTITUTO poderá ainda, compartilhar dados não identificados ou agregados com quaisquer terceiros, inclusive para fins de pesquisa e análise.
11.3 Para efeito do TERMO DE COOPERAÇÃO, (i) dados não identificados são dados que não estão vinculados ou razoavelmente vinculáveis a uma pessoa ou dispositivo específico e (ii) dados agregados são os dados coletados que foram combinados com informações de terceiros, para que o destinatário dos dados não consiga identificar nenhuma pessoa ou dispositivo específico a partir dos dados.
11.4 O INSTITUTO adota políticas e medidas de segurança da informação e proteção de dados adequadas às suas atividades e às informações e dados de terceiros por ele recebidos. Não obstante, os PARTÍCIPES concordam que o INSTITUTO não é responsável por eventual roubo, destruição ou divulgação inadvertida de informações recebidas ou transmitidas virtualmente e on line em razão do TERMO DE COOPERAÇÃO.
CLÁUSULA DÉCIMA SEGUNDA – DAS OBRIGAÇÕES DE CONFORMIDADE E
ANTICORRUPÇÃO
12.1 A INSTITUIÇÃO, bem como seus sócios, representantes legais, diretores, Agentes ou qualquer pessoa agindo em nome da INSTITUIÇÃO ou das pessoas anteriormente especificadas, bem como o (a) COORDENADOR (A), não podem:
(a) ter utilizado ou utilizar recursos para o pagamento de contribuições, presentes ou atividades de entretenimento ou qualquer outra despesa ilegal relativa à atividade política;
(b) ter realizado ou realizar ação destinada a facilitar uma oferta, pagamento ou promessa ilegal de pagar, bem como ter aprovado ou aprovar o pagamento, a doação de dinheiro, propriedade, presente ou qualquer outro bem de valor, direta ou indiretamente, para qualquer “oficial do governo” (incluindo qualquer oficial ou funcionário de um governo ou de entidade de propriedade ou controlada por um governo ou organização pública internacional ou qualquer pessoa agindo na função de representante do governo ou candidato de partido político) a fim de influenciar qualquer ação política ou obter uma vantagem indevida com violação da lei aplicável;
(c) ter realizado ou realizar qualquer pagamento ou tomar qualquer ação que viole qualquer lei aplicável; ou
(d) ter realizado ou realizar um ato de corrupção, pago valor ilegal, bem como influenciado o pagamento de qualquer valor indevido.
12.2 A INSTITUIÇÃO deve conduzir seus negócios em conformidade com a legislação aplicável às quais ela está sujeita, especialmente a legislação anticorrupção, bem como ter instituído, mantido e continuar a manter políticas e procedimentos elaborados para garantir a contínua conformidade com referidas normas ("Obrigações de Conformidade").
12.3 A INSTITUIÇÃO deverá informar imediatamente, por escrito, ao INSTITUTO, detalhes de qualquer violação relativa às Obrigações de Conformidade que eventualmente venha a ocorrer. Esta é uma obrigação permanente e deverá perdurar até o término do TERMO DE COOPERAÇÃO.
12.4 A INSTITUIÇÃO deve: (a) sempre cumprir estritamente as Obrigações de Conformidade; (b) monitorar seus colaboradores, agentes e pessoas ou entidades que estejam agindo por sua conta ou em nome do INSTITUTO para garantir o cumprimento das Obrigações de Conformidade; e (c) deixar claro em todas as suas transações em nome do INSTITUTO que o INSTITUTO exige cumprimento às Obrigações de Conformidade.
12.5 Ao firmar o TERMO DE COOPERAÇÃO, a INSTITUIÇÃO, bem como a FUNDAÇÃO e o (a) COORDENADOR (A) declaram conhecer e obrigam-se a observar, no que lhes couber, os termos e condições do Código de Ética e Conduta do INSTITUTO, o qual se encontra disponível a todos, em sua versão atualizada, no site do INSTITUTO.
CLÁUSULA DÉCIMA TERCEIRA – DO PESSOAL
13.1 O pessoal alocado individualmente pelos PARTÍCIPES para a execução do TERMO DE COOPERAÇÃO, seja na condição de empregado, autônomo, empreiteiro ou a qualquer outro título, nenhuma vinculação ou direito terá em relação aos demais PARTÍCIPES contratantes, ficando a cargo exclusivo de cada PARTÍCIPE a integral responsabilidade, no que lhe couber, quanto aos deveres e direitos relativos ao pessoal por ele alocado, mormente os direitos trabalhistas e previdenciários, inexistindo, portanto, qualquer tipo de solidariedade ou vínculo de qualquer espécie entre os PARTÍCIPES em razão dessas atividades ou obrigações.
CLÁUSULA DÉCIMA QUARTA - DA PROPRIEDADE DOS BENS ADQUIRIDOS COM RECURSOS DO INSTITUTO
14.1 Os bens materiais adquiridos, construídos e produzidos, conforme definido no Plano de Trabalho e com recursos financeiros aportados pelo INSTITUTO para execução do objeto do TERMO DE COOPERAÇÃO, serão de propriedade da INSTITUIÇÃO.
CLÁUSULA DÉCIMA QUINTA - DO IMPOSTO SOBRE TRANSMISSÃO CAUSA
MORTIS E DOAÇÃO
15.1 Quando do crédito dos recursos previstos no item 3.1, caberá ao INSTITUTO, por conta e ordem da INSTITUIÇÃO emitir a guia e recolher em favor do Estado do Rio de Janeiro, sede e origem do INSTITUTO, o imposto sobre transmissão causa mortis e doação (“ITCMD”) devido sobre o valor recebido, quando cabível, comprometendo-se o INSTITUTO a encaminhar o respectivo comprovante de pagamento do ITCMD à INSTITUIÇÃO, por meio eletrônico, até o quinto dia útil subsequente à data de recolhimento do mencionado imposto.
15.2 Na eventualidade de a INSTITUIÇÃO ser detentora de certificação de imunidade ou isenção quanto ao recolhimento do ITCMD, devidamente emitida pelo Estado do Rio de Janeiro nos devidos termos da Lei nº 7.174/2015, que regulamenta a matéria, caberá à INSTITUIÇÃO enviar ao INSTITUTO, no ato de assinatura do TERMO DE COOPERAÇÃO, o mencionado documento comprobatório de isenção ou imunidade, visando com isso evitar a retenção e recolhimento do ITCMD devido, nos termos previsto no item 15.1 acima.
15.3 Ainda em relação ao ITCMD, os PARTÍCIPES acordam que, na eventualidade de a INSTITUIÇÃO não dispor de certificação de imunidade ou isenção emitida pelo Estado do Rio de Janeiro, na forma prevista pela Lei 7.174/2015, poderá, alternativamente e a seu exclusivo critério de decisão e responsabilidade, firmar e apresentar ao INSTITUTO, no ato de assinatura do TERMO DE COOPERAÇÃO, Termo Autodeclaratório de Isenção, nos termos do que dispõe o Decreto 47.031/2020, sendo certo que, na eventualidade da autoridade fazendária competente do Rio de Janeiro entender, a seu exclusivo critério de avaliação e julgamento e num eventual procedimento de fiscalização, que a INSTITUIÇÃO não está enquadrada nos liames do mencionado Decreto de concessão do direito à isenção, se responsabilizará a INSTITUIÇÃO pelo imediato pagamento de referido imposto.
15.4 Caberá exclusivamente à INSTITUIÇÃO manter o INSTITUTO regularmente informado a respeito de qualquer ocorrência prevista no item 15.3 acima, bem como manter indene o INSTITUTO quanto à responsabilidade, cobrança ou qualquer outra obrigação decorrente do tributo referido no item 15.1.
CLÁUSULA DÉCIMA SEXTA - DA EXECUÇÃO
16.1 É vedado o aditamento do TERMO DE COOPERAÇÃO com o intuito de alterar seu objeto, entendida como tal a modificação, ainda que parcial, da finalidade definida no plano de trabalho, mesmo que não haja alteração da classificação econômica da despesa, observado o disposto na cláusula 1.2.
CLÁUSULA DÉCIMA SÉTIMA – DAS CONDUTAS VEDADAS
17.1 É vedado às partes:
a) Alterar o objeto do convênio deste TERMO DE COOPERAÇÃO;
b) Realizar despesa em data anterior à vigência do TERMO DE COOPERAÇÃO, ressalvado o disposto na cláusula 2.2;
c) Realizar despesas com publicidade, salvo a de caráter educativo, informativo ou de orientação social, da qual não constem nomes, símbolos ou imagens que caracterizem promoção pessoal e desde que sejam observadas as disposições do TERMO DE COOPERAÇÃO, especialmente as Cláusulas Quinta e Sexta deste documento.
CLÁUSULA DÉCIMA OITAVA - DA DENÚNCIA E DA RESCISÃO
18.1 O TERMO DE COOPERAÇÃO poderá ser denunciado por qualquer dos PARTÍCIPES, mediante aviso prévio, por escrito, com 30 (trinta) dias de antecedência, ou rescindido, de pleno direito, no caso de inadimplência de suas cláusulas, por quaisquer dos PARTÍCIPES.
18.2 O TERMO DE COOPERAÇÃO poderá ser rescindido por inadimplência de quaisquer de suas cláusulas. Neste caso, qualquer dos PARTÍCIPES adimplentes poderá encaminhar um prévio aviso ao Partícipe inadimplente para saneamento da falta em até 15 (quinze) dias, sob pena de, não sendo sanada a falta neste período, o TERMO DE COOPERAÇÃO ser rescindido imediatamente de pleno direito.
18.3 Caso o TERMO DE COOPERAÇÃO seja rescindido imotivadamente pela FUNDAÇÃO ou rescindido por inadimplência da FUNDAÇÃO, deverá a FUNDAÇÃO devolver imediatamente ao INSTITUTO os valores por este já desembolsados, mas ainda não aplicados no PROJETO. Se a rescisão imotivada ou o inadimplemento partir da INSTITUIÇÃO ou do (a) COORDENADOR (A), deverá o Partícipe que rescindir imotivadamente ou estiver inadimplente devolver ao INSTITUTO os valores por este até então desembolsados, desde a data do desembolso até a data da efetiva devolução. Se a rescisão imotivada ou inadimplência partir do INSTITUTO, perderá ele os valores até então desembolsados, sem prejuízo da sua obrigação de desembolsar, imediatamente após a rescisão, o saldo ainda não desembolsado do valor referido na cláusula 3.1.
18.4 Poderá ainda ser rescindido o TERMO DE COOPERAÇÃO por motivo de força maior na forma da legislação aplicável, ou impossibilidade de sua execução por ato da autoridade competente, respeitados os compromissos já em vigor. Em caso de denúncia ou rescisão na forma desta cláusula, as partes responsabilizar-se-ão pelas obrigações surgidas enquanto o TERMO DE COOPERAÇÃO estiver em vigor e gozarão dos benefícios adquiridos no mesmo período.
18.5 Poderá também o INSTITUTO, a seu exclusivo critério e sem que isso se caracterize como infração contratual, rescindir o TERMO DE COOPERAÇÃO de forma imediata e unilateral, nas hipóteses de o (a) COORDENADOR (A) inadimplir com suas atividades contratuais, bem como, comprovadamente e na execução do PROJETO, praticar ou permitir que se pratique, no exercício das atividades de sua equipe de pesquisa, atos ou ações que caracterizem infração às normas legais de proteção à propriedade intelectual vigentes, bem como aquelas que possam ser consideradas como assédio moral ou sexual, nas formas previstas em lei.
18.5.1 Neste caso, deverá a FUNDAÇÃO devolver imediatamente ao INSTITUTO os valores por este já desembolsados, mas ainda não aplicados no PROJETO, ficando o INSTITUTO desobrigado de realizar qualquer desembolso adicional.
18.6 Os PARTÍCIPES declaram-se aptos e capazes a assinatura do TERMO DE COOPERAÇÃO e possuem todas as condições e poderes necessários à assinatura, formalização, cumprimento e execução do mesmo, sendo que, todas as obrigações aqui assumidas foram devidamente autorizadas pelos PARTÍCIPES, não havendo dúvidas acerca da legalidade e validade do presente instrumento.
18.7 Em havendo nulidade de qualquer estipulação do TERMO DE COOPERAÇÃO, restarão válidas as demais disposições contratuais, não afetando assim a validade do negócio jurídico ora firmado em suas disposições gerais.
18.8 A tolerância dos PARTÍCIPES com relação ao não cumprimento de alguma cláusula do TERMO DE COOPERAÇÃO será considerada mera liberalidade, não implicando sua renúncia ou novação, podendo ser exigido seu cumprimento posteriormente, a qualquer tempo.
CLÁUSULA DÉCIMA NONA - DAS DISPOSIÇÕES GERAIS
19.1 O TERMO DE COOPERAÇÃO não estabelece qualquer relação de agenciamento ou representação legal, contrato de sociedade, vínculo associativo, prestação de serviços ou outro negócio similar. Nenhum dos PARTÍCIPES estará autorizado ou habilitado a atuar como agente, subordinado, mandatário ou representante de qualquer dos PARTÍCIPES, seja de forma individual ou coletiva, nem a efetuar transações ou incorrer obrigações em nome ou por conta de quaisquer PARTÍCIPES. Nenhum dos PARTÍCIPES se referirá ou tratará o TERMO DE COOPERAÇÃO como uma sociedade legal ou tomará nenhuma ação congruente com tal intenção. Os atos, declarações ou conduta de qualquer dos PARTÍCIPES não serão vinculantes ou oponíveis aos outros.
19.2 A tolerância dos PARTÍCIPES com relação ao não cumprimento de quaisquer das cláusulas do TERMO DE COOPERAÇÃO, será considerada mera liberalidade, não implicando sua renúncia ou novação, podendo ser exigido seu cumprimento posteriormente, a qualquer tempo.
CLÁUSULA VIGÉSIMA - DO FORO
Fica eleito o foro da Seção Judiciária da Justiça Federal do Rio de Janeiro - RJ, como competente para dirimir quaisquer dúvidas ou demandas oriundas do presente TERMO DE COOPERAÇÃO, com expressa renúncia de qualquer outro, por mais privilegiado que seja.
E, por estarem justos e avençados, os PARTÍCIPES assinam o presente instrumento, para um só efeito, em 04 (quatro) vias de igual teor e forma, na presença das testemunhas a seguir qualificadas.
Rio de Janeiro, 15 de julho de 2020.
INSTITUTO SERRAPILHEIRA
Xxxx Xxxxxxxxx / Xxxxxx Xxxx Xxxxx xx Xxxxxx et d' Audenhove
FUNDAÇÃO UNIVERSIDADE FEDERAL DE SÃO CARLOS
Xxxxx Xxxxxxxxx Xxxxxxx Xxxxxxxx
FUNDAÇÃO XXXXXX XXXXXXXXX
Xxxxxxx Xxxx
COORDENADOR (A)
Xxxxxxx Xxxxx
JURÍDICO – SERRAPILHEIRA
Xxxxxx Xxxxxx
TESTEMUNHAS:
1. Nome: Xxxxxxx Xxxxxx
   CPF: 000.000.000-00
2. Nome: Isabel Domingues
   CPF: 000.000.000-00
ANEXO I - Projeto
RESEARCH PROJECT SERRAPILHEIRA YOUNG INVESTIGATOR - CALL 3
Automatic Evolution of Deep Neural Networks
Prof. Dr. Xxxxxxx Xxxxx
Department of Computer Science, Federal University of São Carlos - UFSCar
São Carlos, SP
xxxxx@xxxxxx.xx
Contents
1 Statement of the research topic
2 Expected results
3 Methodology to overcome the challenges
3.1 Deep Learning
3.1.1 Convolutional Neural Networks
3.2 Grammar-based EAs for Evolving DNNs
4 Schedule
5 Dissemination and evaluation
6 Requested Funding
Bibliography
1 Statement of the research topic
Classification is one of the main machine learning tasks and, hence, there is a large variety of classification algorithms available. However, in most real-world applications, the choice of a classification algorithm for a new data set or application domain is still mainly an ad-hoc decision. In this context, the use of meta-learning for algorithm recommendation is a very important research area with seminal work dating back more than 20 years.
Meta-learning can be defined as learning how to learn, which involves learning, from previous experience, the best machine learning algorithm (and its best hyper-parameter setting) for a given data set [12]. Meta-learning systems for algorithm recommendation can be divided into two broad groups: (a) systems that perform algorithm selection based on meta-features [12], which is the most investigated type; and (b) systems that search for the best possible classification algorithm in a given algorithm space [56], which is a current hot topic of research. A limitation of meta-feature-based meta-learning research is that usually a small number of candidate classification algorithms are considered as meta-classes. This is because, in general, the larger the number of candidate classification algorithms used as meta-classes, the more difficult it is for the meta-classification algorithm to accurately predict all meta-classes. In addition, it is difficult to produce large meta-data sets for meta-learning, since computing the meta-class of each meta-instance requires running all candidate classification algorithms on all data sets. These problems are aggravated when each meta-class represents not just an algorithm, but
an <algorithm, hyper-parameters> pair, since the need to represent many such pairs greatly
increases the number of meta-classes and thus the difficulty of the problem. To avoid this problem, many meta-learning systems perform algorithm selection using all candidate algorithms with their default hyper-parameter settings, which may not be appropriate for most novel target data sets.
These difficulties have motivated research on the second type of meta-learning for algorithm recommendation, also known as Automated Machine Learning (AutoML). AutoML systems use search or optimization methods to indicate the best classification algorithm for a given target data set, in a given algorithm space, with no human intervention. The focus of this project is on AutoML systems, which can be further sub-divided into two categories, namely AutoML for algorithm configuration and AutoML for algorithm construction. In algorithm configuration, each candidate solution in the search space consists of a candidate classification algorithm with its corresponding hyper-parameter settings. A search or optimization method is used to look for the best candidate solution to be recommended for a given input data set, using a measure of accuracy estimated on the training set as an evaluation function to guide the search. This can also be considered a form of algorithm selection, but it is more usual to call it algorithm configuration [28], to emphasize that the unit of selection is not just a classification algorithm but an algorithm with specific hyper-parameter settings [56, 20]. This is important because it is well known that the predictive performance of classification algorithms tends to depend strongly on their hyper-parameter values. Note, however, that in this approach the set of candidate classification algorithms is pre-defined by the user. By contrast, in search-based AutoML for algorithm construction, there is no pre-defined set of candidate classification algorithms to select from. Rather, the AutoML system has the autonomy to create any kind of classification algorithm (even algorithms that have never been proposed before) that is valid within the search space of candidate algorithms [44, 7, 6, 8]. This search space is implicitly defined by a set of modular algorithmic components and hyper-parameters, which act as "building blocks" that can be combined in many different ways.
Among machine learning approaches, Deep Neural Networks (DNNs) have become the state-of-the-art approach for solving difficult problems in computer vision, speech recognition, language processing, and other complex domains [53, 23, 27]. However, such notable results depend directly on the topology and parametrization of these systems, which can be challenging to design. Hand-designing a DNN requires expert knowledge and a lot of trial and error, especially as the difficulty of the problem grows. Still, much of the work on DNNs focuses on hand-crafted deep architectures, such as ResNeXt-101 (64x4d) [60], PolyNet [65], DenseNet [27], Inception-v4 [55], and Xception [15], just to name a few.
In this context, several approaches have been developed to automatically optimize architectures and hyper-parameters of deep networks, such as MetaQNN [5], EAS [14], SMASHv2 [13], NASNet-A [66], PNASNet-5 [33], and others [9, 1, 29, 14]. Several Evolutionary Algorithms (EAs) have also been used as search methods for optimizing DNNs. Due to limited computational resources, most algorithms focus on specific parts of the design, such as (i) hyper-parameters [37, 62, 26, 36], (ii) topology [19, 2, 51], or (iii) weights [41, 45, 50] of existing DNNs. More recently, some approaches that evolve multiple DNN aspects in a single EA have been proposed. In [38], a Genetic Algorithm (GA) is introduced to optimize the topology and parameters of DNNs. A similar approach is proposed in [52], where DNN topologies and initial weights are optimized for the image classification domain. CoDeepNEAT [40] was also proposed, extending neuroevolution to DNN topology, components, and hyper-parameters. Other examples of evolutionary methods for optimizing DNNs – mostly Convolutional Neural Networks (CNNs) – are: ECGP-CNN (ResSet) [51], GeNet-2 [59], Hierarchical [34], EvoCNN [52], LSE [46], AmoebaNet [47], and Memetic Evolution [35]. Most of these approaches are referred to as Neural Architecture Search (NAS) [18, 58], because they focus on network architecture optimization. In contrast, Automated Machine Learning (AutoML) methods are not restricted to architecture optimization: they also optimize hyper-parameters.
Recently, two related works proposed the algorithms DENSER [3] and FastDENSER [4], evolutionary approaches that combine a Genetic Algorithm (GA) with Grammatical Evolution (GE) for evolving CNNs. The GA is used to encode sequences of layers indicating the starting symbols of a grammar, which the GE then uses to evolve valid sequences of layers. The GA's individuals are represented by the structure [(features, 1, 10), (classification, 1, 2), (softmax, 1, 1), (learning, 1, 1)], where each tuple indicates a starting symbol and the minimum and maximum number of times it can be used. The algorithms then evolve networks with up to 10 convolution or pooling layers, followed by up to 2 fully-connected layers, and ending with a softmax classification layer.
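The outer GA representation quoted above can be read as a sampling problem over per-symbol repetition counts. A minimal sketch of that reading (the function name and sampling logic are our own illustration, not DENSER's actual code):

```python
import random

# DENSER-style outer structure: (grammatical starting symbol, min, max uses).
STRUCTURE = [("features", 1, 10), ("classification", 1, 2),
             ("softmax", 1, 1), ("learning", 1, 1)]

def sample_outline(rng):
    """Sample how many times each starting symbol will be expanded by the
    inner grammatical-evolution step (the expansion itself is not shown)."""
    return [(symbol, rng.randint(low, high)) for symbol, low, high in STRUCTURE]

outline = sample_outline(random.Random(7))
# Always 1 softmax and 1 learning tuple; 1-10 feature layers; 1-2 classifiers.
```

Because softmax and learning have identical lower and upper bounds, every sampled outline ends with exactly one classification layer and one learning setup, matching the constraint described in the text.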
Besides the maximum number of layers, these approaches have some additional limitations. First, some important hyper-parameters – such as kernel size, stride, and batch size – have their values fixed in the grammar, i.e., they are not optimized. The grammar allows the choice of different optimizers, namely Gradient (with or without Nesterov), RMSProp, and Xxxx, but their parameters (e.g., learning rate) are fixed values. A second, and very important, limitation is the restriction imposed on the search space of architectures: micro-architecture modules, also known as Network-in-Network (NiN) modules (a very important contribution of state-of-the-art deep networks), are not supported. Thus, DENSER and FastDENSER evolve essentially VGG-like architectures [48], which are not among the state-of-the-art networks.
Thus, in this project we propose to use Grammar-based Evolutionary Algorithms, more precisely Grammatical Evolution (GE) and Grammar-based Genetic Programming (GGP), to automatically evolve DNN architectures and their hyper-parameters, taking into account the newest contributions from state-of-the-art DNNs, such as NiN architectures, which enabled the construction of important micro-architecture modules, like Inception, Residual, and Dense blocks. The focus of this four-year project is to automatically evolve CNNs to solve Image Classification problems. However, the proposed methodology can be easily extended to other types of DNNs and applications.
We propose to use an evolutionary algorithm because its use is feasible and justifiable when the problem being solved is non-convex and the search space prevents an analytical or approximate solution. This is clearly the case when optimizing architectures and hyper-parameters of DNNs. Also, besides being very easy to code, evolutionary algorithms are more suitable than other meta-heuristics for discrete optimization problems. Our problem is a discrete optimization problem where the variables are defined as binary numbers (which can be used directly in GE or mapped from GGP), and we search for the best combination in the set of variables. It is thus a combinatorial optimization problem, to which evolutionary algorithms have been successfully applied over many years, including NAS and AutoML problems. Another advantage of evolutionary algorithms here is that they are very suitable for problems with many local optima. This is our case, since different networks can result in the same predictive performance, i.e., the same accuracy rate can be obtained from different combinations of which examples (images) are correctly or incorrectly classified.
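The evolutionary search argued for above follows the standard loop of selection, variation, and replacement. A minimal sketch, with a toy bit-string fitness (OneMax) standing in for the expensive step of training and validating a decoded DNN; all names and parameters here are illustrative:

```python
import random

def evolve(fitness, length=16, pop_size=20, generations=50, seed=1):
    """Toy evolutionary loop: truncation selection plus bit-flip mutation.
    In the project, `fitness` would train the decoded DNN and return its
    validation accuracy; here it is just a cheap stand-in."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = [[bit ^ (rng.random() < 1.0 / length) for bit in p]
                    for p in parents]
        pop = parents + children          # elitist: parents are kept
    return max(pop, key=fitness)

# OneMax (count of 1-bits) stands in for validation accuracy.
best = evolve(fitness=sum)
```

The loop is deliberately elitist, so the best fitness found never decreases; in the project the dominant cost would be inside `fitness`, which motivates the cost-reduction strategies planned for the second year.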
The importance and high potential of this project can be verified in [49], a very recent paper published in Nature Machine Intelligence. This paper presents several key aspects of modern neuroevolution (an alternative term for the "evolution of deep neural networks"), including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field's contributions to meta-learning and architecture search. All these challenges are covered by this project, showing its high potential for contribution and that it is at the state of the art in the fields of machine learning and data science.
2 Expected results
We expect the following contributions to the state-of-the-art:
• First year: Development of grammar-based evolutionary algorithms, more specifically Grammatical Evolution (GE) and Grammar-based Genetic Programming (GGP), to guide the search for new Deep Neural Networks (DNNs).
It is important to highlight that, unlike other EA-based approaches to evolving DNNs, operators such as crossover and mutation do not operate directly on the networks but on the vectors (GE) or derivation trees (GGP), significantly reducing the computational cost. This is an important contribution.
• Second year: Development of new strategies to reduce the computational cost of Evolutionary Algorithms combined with Deep Neural Networks. Examples of such strategies are incorporating sampling strategies and a small number of training epochs, to reduce the training data used during evolution, and exploiting the GPU-based technologies available on the requested GPU server. A recently published benchmark data set will also be used to speed up the fitness evaluation, as discussed in Section 5.
• Third and fourth years: Taking into account the knowledge used in state-of-the-art deep neural networks, which may be done by including their main innovations (mostly micro-architecture modules) in the search space of our approach. The idea of incorporating NiN architectures will be implemented progressively in three different ways:
(i) Evolving deep neural networks by combining regular layers with micro-architecture modules used by the state-of-the-art networks, such as Inception, Residual, and Dense blocks. In this idea, the search space is restricted to existing blocks with their original configurations, i.e., it will not be possible to optimize parameters like kernel size, padding, stride, and others.
(ii) Incorporating into the previous approach the possibility to configure layers and blocks with different parameters. In this idea, the composition of the existing blocks (e.g., Inception) would remain the same, but their internal structures (such as convolution and pooling layers) could be parameterized with different values, i.e., different values for kernel size, padding, stride, and other parameters.
(iii) Allowing the evolution of deep neural networks composed of micro-architecture modules never seen in the literature, i.e., blocks composed of existing structures (convolution and pooling layers, for example), but in different quantities, parameter values, and arrangements compared to the existing blocks.
• Implementing two options of evolution: (i) a DNN tailored to a specific Image data set (a specific approach); and (ii) a DNN tailored to multiple Image data sets (a general approach).
(i) In the first option (specific approach), the meta-training and meta-test sets are the conventional training and test data obtained from a given Image data set.
(ii) In the general approach, we have multiple Image data sets (or part of them) comprising the meta-training set, and possibly multiple (but different) Image data sets (or part of them) comprising the meta-test set. This approach can be employed with the purpose of designing a DNN tailored to a particular application domain.
We believe it is possible to reach current state-of-the-art performance, becoming the first to do so automatically [61]. Also, the evolved networks may be smaller (more compact) than the state-of-the-art ones, which would also be a significant contribution. In the best scenario, our approach will automatically evolve smaller deep neural networks that beat the current state-of-the-art ones in predictive performance. However, achieving either of these contributions for at least one data set would already be valuable.
We also consider the training of human resources a very important contribution of this project. Thus, we plan to involve at least one PhD thesis related to the research developed within the project. There are also other topics open to graduate and undergraduate students, and once we have candidates we will apply for additional scholarships (mainly post-docs) related to the project during its execution.
3 Methodology to overcome the challenges
3.1 Deep Learning
Deep learning is part of a broader family of machine learning methods that aim to model high-level abstractions from raw data using architectures composed of multiple non-linear transformations [16]. Its basic principle is to use a complex hierarchical structure for data representation learning, in a manner similar to human brain activity [11].
Deep learning algorithms are based on distributed representations, whose premise is that observed data are generated from the interaction of many different factors at multiple levels [10]. Therefore, such algorithms make the assumption that these factors can be obtained by composing multiple non-linear transformations, which correspond to different levels of abstraction or composition [11].
The number of levels of composition of non-linear operations defines the depth of the architecture. Most learning algorithms are limited to 1, 2 or 3 levels, corresponding to shallow architectures. For comparison, the human brain is organized as a deep architecture, in which a given input percept is represented at many levels of abstraction, each corresponding to a different area of the cortex [11].
Inspired by the complex neurological structure of the human brain, early studies focused on training strategies for deep learning architectures [57]. However, they reported positive results for two or three levels, but poor results with more levels [11]. Later, significant advances were obtained by the work of Xxxxxx et al. [25], which introduced Deep Belief Networks (DBNs). When learning these deep neural networks, each hidden layer can be trained in a greedy fashion, layer by layer, using an unsupervised learning algorithm. The algorithm used to train each hidden layer can be, for example, a Restricted Boltzmann Machine (RBM) or an auto-encoder. In classification tasks, the unsupervised learning is used as an initialization step to construct supervised deep neural networks. After this first unsupervised step, the networks go through a fine-tuning procedure aimed at adjusting the weights to the final global objective (classification). This fine-tuning is performed using a supervised algorithm, such as backpropagation [32].
Nowadays, deep learning methods are widely used in many research areas, especially in computer vision tasks. According to [22], these methods can be divided into four categories: Restricted Boltzmann Machine (RBM), Autoencoder, Sparse Coding, and Convolutional Neural Network (CNN). This project will focus on CNNs, the most widely used for Image classification.
3.1.1 Convolutional Neural Networks
Among all the categories of deep learning methods, Convolutional Neural Networks (CNNs) are the most popular for pattern recognition and video analysis. They have been used both for feature learning and in end-to-end approaches [17]. The key advantage of CNNs is that they are specialized for processing data with a grid-like topology, such as regular time intervals (a 1-D grid of samples) and images (a 2-D grid of pixels) [21].
The pipeline of the general CNN architecture is shown in Figure 1. Usually, a CNN is composed of three main kinds of layers [22]: (1) convolutional layers, which use various kernels to convolve the input data and produce different feature maps; (2) pooling layers, which are used to reduce the dimensions of feature maps and network parameters; and (3) fully connected layers, which convert multidimensional feature maps into a unidimensional feature vector, yielding an output vector of pre-defined length.

Figure 1: The pipeline of the general CNN architecture [22].
The main benefits of convolutional layers are: (i) sharing the weights of the same feature map, thus reducing the number of parameters; and (ii) learning correlations among neighboring pixels, thus preserving local connectivity. Similar to convolutional layers, pooling layers are also translation invariant. Popular pooling strategies are average and max pooling. Other well-known approaches are: (i) stochastic pooling [63], where the activations taken from each pooling region are picked at random according to a multinomial distribution; (ii) spatial pyramid pooling [24], where fixed-size representations are extracted from input data of arbitrary size; and (iii) def-pooling [43], which can learn the deformation constraint and geometric model of visual patterns. Fully-connected layers behave like a traditional neural network and contain about 90% of the parameters in a CNN, accounting for most of the computational cost of training [22].
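The average and max pooling strategies mentioned above reduce to a simple reduction over windows of the feature map. A pure-Python sketch, assuming non-overlapping windows (stride equal to the window size) and no padding:

```python
def pool2d(x, size=2, op=max):
    """Apply `op` over non-overlapping size x size windows of a 2-D grid."""
    out = []
    for i in range(0, len(x) - size + 1, size):
        row = []
        for j in range(0, len(x[0]) - size + 1, size):
            window = [x[i + di][j + dj]
                      for di in range(size) for dj in range(size)]
            row.append(op(window))
        out.append(row)
    return out

feature_map = [[1, 3, 2, 1],
               [4, 2, 0, 1],
               [1, 1, 3, 4],
               [0, 2, 2, 5]]
print(pool2d(feature_map, 2, max))                          # [[4, 2], [2, 5]]
print(pool2d(feature_map, 2, lambda w: sum(w) / len(w)))    # average pooling
```

Each 2 × 2 window collapses to a single value, halving both spatial dimensions, which is exactly the dimensionality reduction role the text attributes to pooling layers.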
There are two stages for training a CNN [22]: a forward stage, where the current parameters (weights and biases) are used to extract appropriate representations from input data; and a backward stage, where all the parameters are updated based on the differences between results observed in the data (ground truth) and estimated by the model (predictions). After sufficient iterations of the forward and backward stages, the learning process can be stopped.
Deep architectures can learn more abstract information than shallow ones. However, the more complex a model, the more prone it is to overfitting. In this case, regularization is a key component in preventing overfitting. Many regularization methods have been proposed in the literature. The most used are: (i) dropout, where some of the feature maps are randomly omitted thus preventing complex co-adaptations on the training data; (ii) data augmentation, where class-preserving transformations are used to generate copies of input data, thus increasing the size of training sets; and (iii) pre-training and fine-tuning, where the networks are initialized with pre-trained parameters [22].
In recent years, several CNNs have been proposed in the literature, such as LeNet-5 [53], Xxxx-Net [31], VGG-16 and VGG-19 [48], Inception-V1 [53], Inception-V2 and Inception-V3 [54], ResNet-50 [23], Xception [15], Inception-V4 and Inception-ResNet-V2 [55], and ResNeXt-50 [60], just to name a few. These networks have become very deep and extremely difficult to visualize. However, Szegedy et al. [55] emphasize that most of this progress is not just the result of bigger models, larger data sets, and more powerful hardware, but mainly a consequence of new ideas, algorithms, and improved architectures. Some of the most used deep neural networks are VGG-like ones, such as VGG-16 and VGG-19. Developed by the Visual Geometry Group, VGG-16 has 13 convolutional and 3 fully-connected layers. The main contribution of this network is its depth, around twice that of AlexNet. It also uses Rectified Linear Units (ReLUs) as activation functions and overlapping pooling, both introduced by AlexNet [31].
In [53], the authors introduced GoogLeNet, also known as Inception-V1. This was the first work to build networks using dense modules (also called blocks). In contrast to VGG-like networks, the Inception module introduced parallelism, increasing the "width" of the network while keeping the computational budget constant. Having parallel towers of convolutions with different filters, followed by concatenation, captures different features at the 1 × 1, 3 × 3, and 5 × 5 scales, thus "clustering" them. The 1 × 1 convolutions are used for dimensionality reduction, removing computational bottlenecks and adding non-linearity within a convolution. Also, two auxiliary classifiers are introduced to stimulate discrimination in earlier stages, to increase the gradient signal in the backward stage, and to provide additional regularization.
Szegedy et al. [54] introduced Inception-V3, a successor to Inception-V1. Thanks to the combination of a lower parameter count and additional regularization with batch-normalized auxiliary classifiers and label smoothing, the authors showed how factorizing convolutions and aggressive dimension reductions could result in networks with relatively low computational cost without losing high quality. One of the first architectures to use batch normalization, Inception-V3 improved on Inception-V1 in the following ways: (i) factorizing n × n convolutions into asymmetric 1 × n and n × 1 convolutions; (ii) factorizing 5 × 5 convolutions into two 3 × 3 convolution operations; and (iii) replacing 7 × 7 convolutions with a series of 3 × 3 convolutions.
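The parameter savings from the factorizations listed above are easy to verify by counting the weights of each filter (ignoring biases and channel counts, which scale every variant equally):

```python
# Weights of a k_h x k_w convolution filter, per input/output channel pair
# (biases and channel counts omitted: they scale every variant equally).
def weights(k_h, k_w):
    return k_h * k_w

# (ii) one 5x5 filter vs. two stacked 3x3 filters (same receptive field):
print(weights(5, 5), "vs", 2 * weights(3, 3))             # 25 vs 18 (28% fewer)

# (i) one 7x7 filter vs. asymmetric 1x7 followed by 7x1 filters:
print(weights(7, 7), "vs", weights(1, 7) + weights(7, 1))  # 49 vs 14
```

The asymmetric factorization is the more aggressive of the two, which is why it is applied to the larger kernels.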
From the above-mentioned CNN architectures, we can note that most modifications are concerned with increasing the number of layers in order to achieve better performance. However, He et al. [23], from Microsoft Research, proposed an even deeper CNN architecture called ResNet, which popularized skip connections (also known as shortcut connections) and, mainly, residuals. According to He et al. [23], "when deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated and then degrades rapidly". The authors solved this problem by using residual blocks, achieving the new state-of-the-art performance, at that time, on the ImageNet data set.
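The residual block behind ResNet computes y = F(x) + x, where F is the stacked transformation inside the block. A minimal sketch of the shortcut connection (illustrative only; the real F is a stack of convolutions):

```python
def residual_block(x, transform):
    """y = F(x) + x: the identity shortcut lets the signal (and gradients)
    bypass the transformation F entirely."""
    return [fx + xi for fx, xi in zip(transform(x), x)]

# With F ~ 0 the block reduces to the identity, which is why very deep
# stacks of residual blocks can still start converging.
out = residual_block([1.0, 2.0, 3.0], lambda v: [0.0 for _ in v])
print(out)   # [1.0, 2.0, 3.0]
```

This identity fallback is exactly what counters the degradation problem quoted above: adding more blocks can never make the network worse than the identity mapping.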
ResNet gained notoriety in the research community, and several researchers proposed new architectures based on it. By adding parallel towers/branches/paths within each block of ResNet, Xie et al. [60] introduced ResNeXt, a variant of ResNet. In [27], the authors proposed DenseNet (Dense Convolutional Network), which connects each layer to every other layer in a feed-forward fashion. In practice, each layer receives a "collective knowledge" from all preceding layers. Instead of drawing representational power from extremely deep or wide architectures, as in Inception and ResNet, DenseNets exploit the potential of the network through feature reuse, resulting in simpler models that are easier to train. Concatenating feature maps learned by different layers increases variation in the input of subsequent layers and improves efficiency compared with ResNets. Compared to Inception networks, DenseNets are simpler and more efficient.
Also, Szegedy et al. [55] proposed some modifications to previous versions of Inception, yielding Inception-V4. These modifications included changes to the Stem module and using the same number of filters for every Inception module, besides adding more of them. Additionally, the authors introduced the Inception-ResNet architectures, versions V1 and V2, by combining the ideas of Inception and ResNet. The Inception blocks were converted to Residual Inception blocks. Also, more Inception modules were added to the network, besides a different Inception block (Inception-A) after the Stem module. The authors mentioned that "having residual connections leads to dramatically improved training speed".
In the next section, we introduce basic concepts of Grammar-based Evolutionary Algorithms, in particular Grammatical Evolution (GE) and Grammar-based Genetic Programming (GGP), and how they can be applied to automatically construct new deep neural network architectures and their hyper-parameters. The proposed approach can be classified as either neuroevolution or AutoML: the former because it searches for new architectures, and the latter because, besides new models, it also searches for their hyper-parameters.
3.2 Grammar-based EAs for Evolving DNNs
Grammar formalisms are important representation structures in computer science. Their main purpose is to represent constraints on general domains; for example, they can define which expressions are valid in a programming language, or control type constraints. Since we are dealing with GP, it is reasonable to assume that we may need to restrict the valid expressions of the computer program we are evolving, or the type of data its functions receive as input. Indeed, using grammars to handle GP individuals is nowadays one of the most widely applied GP methods [39]. Grammar-based (or grammar-guided) GP (GGP) differs from standard GP in, naturally, the use of a grammar. Individuals in GGP are usually represented by a derivation tree generated by the given grammar G. Context-free grammars (CFGs) are the most commonly used in GGP. There are at least three benefits of using GGP [39]: (i) restricting the search space while respecting the closure property; (ii) homologous operators; and (iii) flexibility.
GGP systems that implement the production-rule-sequence-encoding scheme need a mapping process to decode string chromosomes into derivation trees, usually called genotype-phenotype mapping (GPM). Some alleged benefits of this approach are the unconstrained search of the genotype while ensuring phenotype validity, and enhanced genetic diversity, since mutations can be neutral with respect to the phenotype (various genotypes can represent the same phenotype) [42]. This approach was named Grammatical Evolution (GE) [42].
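The genotype-phenotype mapping described above can be illustrated with a toy grammar: each integer codon selects, modulo the number of available productions, how the left-most non-terminal is expanded. The grammar and codons below are illustrative only, not the project's grammar:

```python
# Toy context-free grammar: <net> expands to one or more layers.
GRAMMAR = {
    "<net>":   [["<layer>", "<net>"], ["<layer>"]],
    "<layer>": [["conv"], ["pool"], ["dense"]],
}

def map_genotype(codons, max_expansions=50):
    """Decode an integer genotype into a sequence of layers (the phenotype).
    Each codon picks a production for the left-most non-terminal, modulo
    the number of choices; codons wrap around if the derivation is long."""
    symbols, out, i = ["<net>"], [], 0
    while symbols:
        sym = symbols.pop(0)
        if sym in GRAMMAR:
            if i >= max_expansions:    # runaway derivation: invalid individual
                return out
            choices = GRAMMAR[sym]
            rule = choices[codons[i % len(codons)] % len(choices)]
            symbols = list(rule) + symbols
            i += 1
        else:
            out.append(sym)
    return out

print(map_genotype([0, 0, 1, 1, 2]))   # ['conv', 'pool']
```

Note the neutrality property mentioned above: because selection is modular, distinct codon values (e.g., 1 and 3 for a two-choice rule) produce the same phenotype.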
Figure 2: Fitness evaluation schemes from (a) one data set in the meta-training set and (b) multiple data sets in the meta-training set.
To guide the evolutionary process, we intend to implement two strategies for fitness evaluation, in order to automatically design: (i) a DNN tailored to only one specific data set (a specific approach); and (ii) a DNN tailored to multiple data sets (a general approach). In the first strategy (Figure 2a), the meta-training and meta-test sets are the conventional training and test data obtained from a given data set for machine learning experiments. Note that, in this case, the training and test sets are subsets of the same original data set. More precisely, the training and test sets are described by exactly the same set of predictor attributes, but they have different, non-overlapping sets of instances.
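The partitioning used by the specific approach can be sketched as follows; the 90%/10% meta-split and the 70%/30% train/validation split within the meta-training set are taken from Figure 2a, and the function name is illustrative:

```python
def specific_split(examples, meta_test_frac=0.10, val_frac=0.30):
    """Specific approach (Figure 2a): one data set is split into meta-training
    and meta-test sets; the meta-training set is further split into training
    and validation data used to compute each individual's fitness."""
    n_meta_test = int(len(examples) * meta_test_frac)
    meta_train = examples[:-n_meta_test]
    meta_test = examples[-n_meta_test:]
    n_val = int(len(meta_train) * val_frac)
    train, val = meta_train[:-n_val], meta_train[-n_val:]
    return train, val, meta_test

train, val, meta_test = specific_split(list(range(100)))
print(len(train), len(val), len(meta_test))   # 63 27 10
```

Only `train` and `val` are visible during evolution; `meta_test` is held out until the final evolved DNN is assessed, so the fitness signal never touches the data used for the final evaluation.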
In the general approach (Figure 2b), we have multiple data sets comprising the meta-training set, and possibly multiple (but different) data sets comprising the meta-test set. In this approach, each data set is described by a different set of predictive attributes, so each data set corresponds to a different classification problem. This approach can be employed with two different purposes. The first is to automatically design a DNN that performs reasonably well on a wide variety of data sets. In other words, the evolved DNN will be applied to data sets with very different structural characteristics (i.e., very different numbers and types of predictor attributes) and/or from very distinct application domains (e.g., data sets from medicine, finance, marketing, etc.). In this scenario, the user chooses the distinct data sets that will form the meta-training set, in the hope that evolution is capable of generating a DNN that performs well on a wide range of data sets, i.e., robust networks. We know that it is very difficult (not to say impossible) to evolve such robust networks, but it is a hypothesis that can be verified. The second purpose is to design a DNN tailored to a particular application domain, or to a specific data distribution profile describing a well-defined type of data set. In this scenario, the meta-training set comprises data sets that share similarities, and so the evolved DNN will be tailored to solving a specific type of classification problem. Unlike the previous strategy, in this case we have to define a similarity criterion for creating specialized algorithms, which is not trivial. We highlight some possible similarity criteria: (i) choosing data sets that share the same application domain (e.g., Image); (ii) choosing data sets with provenance resemblance (e.g., data sets generated from data collected by a specific sensor or set of sensors); and (iii) choosing data sets with structural resemblance (e.g., data sets with statistically similar features and/or similar geometrical complexity).
The general idea of this project is to use GGP and GE to automatically evolve Deep Neural Networks and their hyper-parameters. Each individual (candidate solution) would represent variations of a "complete process" of DNN construction, covering "learning algorithms", "sampling strategies", "architecture", and "topology". However, considering the enormous search space involved, we have decided to: (i) fix the gradient-based optimizer (learning algorithm) to a specific one (Xxxx [30] or the recently published Lookahead [64]) and vary only the learning rate; (ii) evolve "only" Convolutional Neural Networks (CNNs); and (iii) apply the approach to a specific application domain: Image Classification.
It is important to highlight, however, that once we demonstrate the success of our approach in this restricted scenario, we can extend it to support other types of Deep Neural Networks, such as Recurrent Neural Networks (RNNs), and other problems (image-based or not), such as Speech Recognition, Natural Language Processing, Image Classification with Localization, Object Detection, Image Style Transfer, Image Reconstruction, and others.
To end this section, Figure 3 illustrates a draft of a grammar to be developed in this project. Considering the three different levels of search space proposed in Section 2, we also illustrate examples of resulting DNNs. Figure 4 shows a new CNN composed of existing blocks, such as Stem-A, Inception-C, and Reduction-B, which are configured with their original parameters. Figure 5a shows a block that can appear in the evolved DNNs when we allow existing "blocks" to be configured with different parameters. Finally, Figure 5b shows a completely new block generated when we allow new "block structures" and their parameters to evolve.
<newDNN>       ::= <sampling> <architecture> ... <learning>
<sampling>     ::= Random sampling | Stratified random sampling | Systematic sampling | ...
<architecture> ::= <stem> <architecture> <classifier> | <block> <architecture> | <block> | ...
<classifier>   ::= <topology> <classifier> <neurons> | <neurons> | ...
<topology>     ::= Fully-Connected | ...
<stem>         ::= Stem-A | Stem-B | ...
<neurons>      ::= <number> <activation>
<activation>   ::= Relu | Sigmoid | ...
<number>       ::= <digit> <number> | <digit>
<digit>        ::= 0 | 1 | ... | 9
<block>        ::= <inception> | <dense> | <residual> | <vgg> | <reduction> | ...
<inception>    ::= Inception-A | Inception-B | Inception-C | ...
<reduction>    ::= Reduction-A | Reduction-B | Reduction-C | ...
...
<pooling>      ::= <type> <size>
<type>         ::= average | min | max | median | ...
<size>         ::= 3 | 5 | 7 | ...
...
Figure 3: Example of rules to be used as part of our grammar.
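In grammatical evolution, a linear genotype of integer codons is mapped onto a grammar like the one in Figure 3 by repeatedly expanding the leftmost non-terminal, with each codon selecting a production via `codon mod n_options`. A minimal sketch over a toy fragment (the `GRAMMAR` dictionary below is a deliberately simplified stand-in, not our full grammar):

```python
# Toy fragment of a Figure 3-style grammar: non-terminal -> list of productions.
GRAMMAR = {
    "<architecture>": [["<block>", "<architecture>"], ["<block>"]],
    "<block>": [["<inception>"], ["<reduction>"]],
    "<inception>": [["Inception-A"], ["Inception-B"], ["Inception-C"]],
    "<reduction>": [["Reduction-A"], ["Reduction-B"]],
}

def derive(genotype, start="<architecture>", max_steps=100):
    """Map a list of integer codons to a phenotype (list of block names)."""
    symbols, i = [start], 0
    while any(s in GRAMMAR for s in symbols):
        if i >= max_steps:
            raise ValueError("derivation did not terminate")
        # Expand the leftmost non-terminal; the codon picks the production.
        j = next(k for k, s in enumerate(symbols) if s in GRAMMAR)
        options = GRAMMAR[symbols[j]]
        choice = options[genotype[i % len(genotype)] % len(options)]
        symbols = symbols[:j] + choice + symbols[j + 1:]
        i += 1  # one codon per expansion, wrapping if the genotype is exhausted
    return symbols

print(derive([0, 0, 2, 1, 1, 0]))  # ['Inception-C', 'Reduction-A']
```

Because every derivation follows the grammar, every genotype maps to a syntactically valid architecture, which is precisely why grammars constrain the search space.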
[Figure 4 diagram: a CNN assembled from existing blocks (stem-A, Inception-A/B/C, Reduction-B/C, avg-pool 5x5, conv 1x1, global avg-pool), with repeated blocks marked x2/x3, from IN to OUT.]
Figure 4: Example of a new CNN composed of some already existing blocks.
[Figure 5 diagrams: (a) Reduction* and (b) Inception*, new blocks built from conv 1x1/3x3/5x5/7x7 layers, avg/max-pool 3x3 and 5x5 operations, and concat nodes, with repetitions marked x2.]
Figure 5: Examples of completely new blocks generated by varying kernel sizes of Reduction-C (Reduction*), and varying kernel sizes, adding filters and more convolutional layers (Inception*).
4 Schedule
Considering a period of 48 months (8 semesters), we propose the following activities and timetable (Table 1) for the development of this research project:
• A1 - Comprehensive literature review: A continuous literature review will be performed, since Deep Learning and NAS/AutoML are hot topics in a very active research area.
• A2 - Definition of building blocks: Based on a thorough study of the literature, we will define which building blocks (components) of DNN construction will be used to automatically evolve the deep/convolutional neural networks. Some examples of components were already mentioned in the previous sections, such as sampling strategies (to deal with computational cost), topology (number of layers, activation functions, type of connections, etc.), architecture (blocks/modules from existing CNNs), and others. We will define a complete set of building blocks during the first year, although we will schedule updates as new contributions emerge.
• A3 - Development of a grammar: Construct a complete set of rules to compose grammars that ensure feasible individuals for both approaches, GE and GGP. The grammar must also support parallelism, enabling state-of-the-art networks to be derived from it. Once a complete set of rules is defined, we will construct different compact grammars to verify the impact of different search spaces on the optimization problem, as mentioned in Section 2.
• A4 - Implementation: During this activity we will implement the techniques to be used in the experiments. We may use different programming languages and frameworks, such as Java, Python, R, C, CUDA, Caffe, TensorFlow, and PyTorch.
• A5 - Experimental plan: Before starting the experiments, we intend to define a detailed experimental protocol covering the data sets to be used, data analysis and preparation, evaluation metrics, statistical tests, etc. This activity is very important because any mistake could force us to rerun all the experiments at a high cost.
• A6 - Experiments: Run the experiments according to the previously defined plan. Different experiments will be run as the project evolves. From the second year on, the requested GPU servers play a crucial role: a considerable number of GPUs is essential for meeting the schedule.
• A7 - Analysis and discussion: The experimental results will be evaluated with well-known predictive performance metrics, such as Accuracy and F-score, obtained with different techniques for estimating predictive performance, such as holdout and cross-validation. We also intend to evaluate the results in terms of network complexity and computational cost.
• A8 - Research visits: We intend to establish national and international research collaborations. Thus, research meetings are planned between São Carlos, São José dos Campos (UNIFESP), and the UK (University of Nottingham), in order to facilitate the development of the different activities, including applications in big data scenarios.
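The holdout and cross-validation protocols mentioned in activity A7 can be sketched in a few lines. A minimal pure-Python k-fold index split (illustrative only; our experiments would use stratified variants from a standard library):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k disjoint (train, test) partitions."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        # Training set: every instance not in the current test fold.
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

splits = list(k_fold_indices(10, 5))
assert len(splits) == 5
# Every instance appears in exactly one test fold.
all_test = sorted(idx for _, test in splits for idx in test)
assert all_test == list(range(10))
```

Averaging a metric over the k test folds then gives the cross-validated estimate of predictive performance.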
Table 1: Work plan.
[Gantt chart: activities A1-A8 scheduled across the 8 semesters of the four project years.]
5 Dissemination and evaluation
In order to disseminate the results obtained in this project, we will submit academic papers to leading scientific journals – such as IEEE Transactions on Evolutionary Computation, Pattern Recognition, and Machine Learning – and to proceedings of conferences in the field of computational intelligence, including the Conference on Knowledge Discovery and Data Mining (SIGKDD), the European Conference on Machine Learning (ECML), the International Conference on Machine Learning (ICML), the Annual Conference on Neural Information Processing Systems (NIPS), and the Genetic and Evolutionary Computation Conference (GECCO). Peer review will help ensure the quality of the project's outputs. Seminars will also be presented during technical visits to other research institutes in Brazil and abroad as a way to establish new collaborations. Since this project is open to the involvement of students, the results are also expected to be reported in coursework texts, dissertations, and theses.
The evaluation will be twofold. First, we will evaluate the proposed approach as a NAS/neuroevolution method, using a recently released benchmark called NASBench [61], the first public architecture data set for NAS research. Its authors constructed a search space containing 423k unique convolutional architectures, all of them trained and evaluated multiple times on the CIFAR-10 data set; the results were compiled into a large data set of over 5 million trained models. As mentioned in Section 2, NASBench can also be used in this project to speed up the evolutionary process: if a given individual represents an architecture/network already evaluated in NASBench, we can simply use the performance metrics available in the NASBench data set to compute the fitness function, without spending time training the network.
Second, we will evaluate the quality of the networks evolved by our approach, considering both fitness evaluation schemes presented in Figure 2. For multiple data sets (Figure 2b), we will use smaller data sets during the evolution and then train the resulting networks on the larger data sets in a "progressive" way, according to data set complexity: among the larger data sets, we will start training the networks on the simplest one (the one that requires the least training time) and, time permitting, proceed to the more complex ones. For single evaluation (Figure 2a), we can ensure that, during the execution of this project, we will evolve convolutional neural networks only for the smaller data sets. Based on the proposed strategies for speeding up the evolution, we will try to automatically evolve CNNs for the larger data sets as well, but we know that this can consume months of experiments.
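The NASBench shortcut described above amounts to memoizing fitness evaluations keyed by a canonical encoding of the architecture. A minimal sketch (the lookup table, encodings, and `train_and_evaluate` below are placeholders standing in for the real NASBench query and training loop):

```python
# Placeholder table standing in for NASBench's precomputed results:
# canonical architecture encoding -> test accuracy.
NASBENCH = {
    ("conv3x3", "maxpool3x3", "conv1x1"): 0.91,
    ("conv3x3", "conv3x3", "conv3x3"): 0.93,
}

evaluations = {"cached": 0, "trained": 0}

def train_and_evaluate(arch):
    """Stand-in for actually training the network (hours on a GPU)."""
    evaluations["trained"] += 1
    return 0.5  # dummy accuracy

def fitness(arch):
    """Return the benchmark's precomputed accuracy when available, else train."""
    key = tuple(arch)
    if key in NASBENCH:
        evaluations["cached"] += 1
        return NASBENCH[key]
    return train_and_evaluate(key)

population = [["conv3x3", "conv3x3", "conv3x3"],
              ["conv3x3", "maxpool3x3", "conv1x1"],
              ["conv5x5", "conv1x1", "conv1x1"]]
scores = [fitness(a) for a in population]
# Only the architecture absent from the table triggers real training.
assert evaluations == {"cached": 2, "trained": 1}
```

In an evolutionary run, most of the population is typically revisited variants, so such caching can remove a large fraction of the training cost.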
Table 2 shows a basic classification of the data sets as smaller (on the order of megabytes) and larger (on the order of gigabytes). These data sets are widely used for Image Classification with deep learning networks. In some image data sets, such as ImageNet and MS-COCO, the images vary in dimensions and resolution; many applications resize/crop all of the images to a specific dimension/resolution, such as 256x256 pixels.
Table 2: Summary of well-known Image Classification data sets to be used in our experiments.
Set | Name | # Train | # Validation | # Test | Dimension | # Classes
Smaller | MNIST | 60,000 | - | 10,000 | 28 x 28 | 10
Smaller | CIFAR-10 | 50,000 | - | 10,000 | 32 x 32 | 10
Smaller | CIFAR-100 | 50,000 | - | 10,000 | 32 x 32 | 100
Smaller | Caltech 101 | 9,146 | - | - | 300 x 200 | 101
Smaller | Caltech 256 | 30,607 | - | - | 300 x 200 | 256
Smaller | SVHN | 73,257 + 531,131 | - | 26,032 | 32 x 32 x 3 | 10
Larger | MS-COCO | 118,000 + 123,000 | 5,000 | 41,000 | vary | > 80
Larger | ImageNet | 14,197,122 | - | - | vary | 21,841
In order to evaluate the results of our approach – including the comparison with state-of-the-art methods – we will use well-known predictive performance measures, such as Accuracy and F-score. We also intend to evaluate the evolved deep neural networks in terms of complexity, which can be measured, for example, by their depth or width.
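As an illustration of the F-score computation, a minimal pure-Python sketch of per-class precision/recall and the macro-averaged F1 (the toy labels are ours; real experiments would use a standard library implementation):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy 3-class example: accuracy ignores class balance, macro F1 does not.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 0, 2, 1]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

The macro average treats every class equally, which matters for imbalanced sets such as SVHN.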
6 Requested Funding
Given the huge number of features that can be used with the neural networks, we need GPUs to considerably speed up the execution of the methods. With GPUs, it is possible to execute deep neural networks involving millions of parameters, using millions of instances and attributes, in a few days; without a GPU, such tasks could take months or even years. The use of GPUs has been of paramount importance for achieving outstanding results in several machine learning tasks, since it allows fast implementations of deep neural network architectures. Thus, we are requesting three servers with several GPUs each in order to run our experiments in a feasible time. Each requested GPU server is a Supermicro SYS-1029GQ-TRT with 2 Intel Xeon Platinum 8260 processors, 192 GB DDR4 RAM, 1-4 1 TB SATA HDDs, and 4-8 NVIDIA Tesla T4 16 GB GPUs.
Since we will mainly develop the approach during the first year, we are requesting two laptops to be used by the researchers for implementing the algorithms, running small experiments, writing papers, and traveling to conferences and research visits. Since we already have a server with one GPU, we can test the new approach (with limited resources) on this server during the first year, and we will need the GPU servers (with several GPUs) during the following three years.
Besides the above-mentioned machines, we are also requesting a competitive PhD scholarship (R$ 4.000,00 per month) and funds for traveling to international conferences and for supporting research visits between São Carlos and potential collaborators, such as São José dos Campos (Prof. Márcio P. Basgalupp, UNIFESP) and Nottingham (Prof. Xxxxx Xxxxxxxxx, University of Nottingham, UK). For these travels, we ask for R$ 30.000,00 per year. We also foresee costs for consumables, computing maintenance, conference registration fees, etc.; we refer to these as Extra Costs (R$ 8.000,00 per year), which will also be available to the PhD student.
Table 3 presents the schedule for the requested budget. In the first year, we intend to focus on the research visits, exchanging knowledge (mainly between Brazil and the UK) by discussing different aspects of the project and developing the first components of the algorithms; thus, in this year, the funds go to the laptops and travel expenses. We intend to support three international visits per year (two from Brazil to the UK and one from the UK to Brazil), besides the national ones. In the second year, we plan to start a PhD scholarship (for three years, following the FAPESP model) and buy the first GPU server, which will be necessary to run the first experiments with "big" demand. As we advance in the proposed activities, increasing the search space of the approach and the complexity of the problems to be solved, we will need more GPU resources, which is why we will buy additional servers during the last years. The budget proposals for the requested equipment are attached at the end of this document.
Item | Year 1 | Year 2 | Year 3 | Year 4
Laptops | US$ 7.000 | - | - | -
GPU Server | - | US$ 35.000 | US$ 35.000 | US$ 35.000
Research visits | R$ 30.000 | R$ 30.000 | R$ 30.000 | R$ 30.000
PhD scholarship | - | R$ 48.000 | R$ 48.000 | R$ 48.000
Extra costs | R$ 8.000 | R$ 8.000 | R$ 8.000 | R$ 8.000
Sub-total | R$ 66.700 | R$ 229.500 | R$ 229.500 | R$ 229.500
Table 3: Budget schedule. (Conversion rate: US$ 1 = R$ 4,10)
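The yearly sub-totals in Table 3 follow from the stated conversion rate; a quick arithmetic check:

```python
RATE = 4.10  # US$ 1 = R$ 4,10, as stated in Table 3

def year_total(usd_items, brl_items):
    """Sub-total in R$ for one budget year."""
    return sum(usd_items) * RATE + sum(brl_items)

# Year 1: laptops (US$ 7.000) + research visits + extra costs.
assert round(year_total([7_000], [30_000, 8_000])) == 66_700
# Years 2-4: one GPU server (US$ 35.000) + visits + scholarship + extras.
assert round(year_total([35_000], [30_000, 48_000, 8_000])) == 229_500
```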
Bibliography
[1] Xxxxxx Xxxxxxx Xxxxx Xxxx, Xxxxx Xxxxx and Xxxxx Xxxxxx. Towards automated deep learning: Efficient joint neural architecture and hyperparameter search. In XXXX 0000 XxtoML Workshop, JMLR Workshop and Conference Proceedings. XXXX.xxx, 2018.
[2] Xxxxxx Xxxxx¸c˜ao, Xxxx Xxxxxx¸co, Xxxxxxxx Xxxxxxx, and Xxxxxxxxxx Xxxxxxx. Evolving the topology of large scale deep neural networks. Genetic Programming, pages 19–34, 2018. XXXX 0000-0000.
[3] Xxxxxx Xxxxx¸c˜ao, Xxxx Xxxxxx¸co, Xxxxxxxx Xxxxxxx, and Xxxxxxxxxx Xxxxxxx. Denser: deep evolutionary network structured representation. Genetic Programming and Evolvable Machines, 20(1):5–35, 2019. ISSN 1573-7632.
[4] Xxxxxx Xxxxx¸c˜ao, Xxxx Xxxxxx¸co, Xxxxxxxx Xxxxxxx, and Xxxxxxxxxx Xxxxxxx. Fast denser: Efficient deep neuroevolution. In Xxxxx Xxxxxxxx, Xxxx Xx, Xxxx Xxxxxx¸co, Hendrik Xxxxxxx, and Xxxxx Xxxx´ıa-Sa´nchez, editors, Genetic Programming, pages 197–212, Cham, 2019. Springer International Publishing. ISBN 978-3-030-16670-0.
[5] Xxxxx Xxxxx, Xxxxxxx Xxxxx, Xxxxxx Xxxx, and Xxxxxx Xxxxxx. Designing neural network architectures using reinforcement learning. CoRR, abs/1611.02167, 2017.
[6] Xxxxxxx X. Xxxxxx, Márcio P. Xxxxxxxxx, André X. X. X. X. xx Xxxxxxxx, and Xxxx X. Freitas. Automatic design of decision-tree algorithms with evolutionary algorithms. Evolutionary Computation, 21(4):659–684, November 2013. ISSN 1063-6560.
[7] Xxxxxxx X. Xxxxxx, Xxxxxx X. Xxxxxxxxx, Xxxx X. Xxxxxxx, and X. X. X. X. X. xx Xxxxxxxx. Evolutionary design of decision-tree algorithms tailored to microarray gene expression data sets. IEEE Transactions on Evolutionary Computation, 18(6):873–892, Dec 2014. ISSN 1089-778X.
[8] Xxxxxxx X. Xxxxxx, Xxxx´e X.X.X.X. xx Xxxxxxxx, and Xxxx X. Freitas. Automatic Design of Decision-Tree Induction Algorithms. Number 978-3-319-14231-9 in SpringerBriefs in Computer Science. Springer, February 2015.
[9] Xxxxxxx Xxxxxx, Xxxxxx-Xxx Xxxxxxxxxx, Xxxxxx Xxxx, Xxxxx Xxxxxxxxx, and Quoc Le. Understanding and simplifying one-shot architecture search. In Xxxxxxxx Xx and Xxxxxxx Xxxxxx, editors, Proceedings of the 35th International Xxxxxxxxxxx on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 549–000, Xxxxxxxxxxxxxxxxx, Xxxxxxxxx Xxxxxx, 10–15 2018. PMLR.
[10] Y. Xxxxxx, X. Courville, and P. Xxxxxxx. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, Aug 2013. ISSN 0162-8828.
[11] Xxxxxx Xxxxxx. Learning deep architectures for AI. Found. Trends Mach. Learn., 2(1):1–127, January 2009. ISSN 1935-8237.
[12] Xxxxx Xxxxxxx, Xxxxxxxxxx Xxxxxx-Xxxxxxx, Xxxxxx Xxxxxx, and Xxxxxxx Xxxxxxx. Metalearning: Applications to Data Mining. Springer, 1 edition, 2008. ISBN 3540732624, 9783540732624.
[13] Xxxxxx Xxxxx, Xxxxxxxx Xxx, Xxxxx X. Xxxxxxx, and Xxxx Xxxxxx. SMASH: one-shot model architecture search through hypernetworks. In International Conference on Learning Representations, 2018.
[14] Xxx Xxx, Xxxxxxxx Xxxx, Xxxxxx Xxxxx, Xxxx Xxx, and Xxxx Xx. Path-level network transformation for efficient architecture search. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018.
[15] François Chollet. Xception: Deep learning with depthwise separable convolutions. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 1800–1807, 2017.
[16] Xx Xxxx and Xxxx Xx. Deep Learning: Methods and Applications. Now Publishers Inc., Hanover, MA, USA, 2014. ISBN 1601988141, 9781601988140.
[17] Xxxxxx Xxxxxxxxxxx, Xxxxxxx Xxxxxxx, Xxxx Xxxxxx Xxxxxxxxxxxx, Xxxxxx X. Riedmiller, and Xxxxxx Xxxx. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE Trans. Pattern Anal. Mach. Intell., 38(9):1734–1747, 2016.
[18] Xxxxxx Xxxxxx, Xxx Xxxxxxx Xxxxxx, and Xxxxx Xxxxxx. Neural architecture search: A survey. J. Mach. Learn. Res., 20:55:1–55:21, 2019.
[19] Chrisantha Xxxxxxxx, Xxxxx Xxxxxxx, Xxxxxxx Xxxxxxxx, Xxxxxxxx Xxxxx, Xxxxx Xxxx, Xxx Xxxxxxxxx, Xxxx Xxxxxxx, and Xxxx Xxxxxxxx. Convolution by evolution: Differentiable pattern producing networks. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO ’16, pages 000–000, Xxx Xxxx, XX, XXX, 0000. ACM. ISBN 978-1-4503-4206-3.
[20] M. Xxxxxx, X. Xxxxx, X. Xxxxxxxxxxxx, X. Springenberg,
M. Xxxx, and F. Hutter. Methods for improving bayesian optimization for automl. In ICML 2015 AutoML Workshop, July 2015.
[21] Xxx Xxxxxxxxxx, Xxxxxx Xxxxxx, and Xxxxx Xxxxxxxxx. Deep Learning. MIT Press, 2016.
[22] Xxxxxxx Xxx, Xx Xxx, Xxx Xxxxxxxxx, Xxxxxxxx Xxx, Xxxx Xx, and Xxxxxxx X. Lew. Deep learning for visual understanding: A review. Neurocomputing, 187:27 – 48, 2016. ISSN 0925-2312. Recent Developments on Deep Big Vision.
[23] K. He, X. Xxxxx, X. Xxx, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, June 2016.
[24] Xxxxxxx Xx, Xxxxxxx Xxxxx, Xxxxxxxx Xxx, and Xxxx Xxx. Spatial pyramid pooling in deep convolutional networks for visual recognition. In Xxxxx Xxxxx, Xxxxx Xxxxxx, Xxxxx Xxxxxxx, and Xxxxx Xxxxxxxxxx, editors, Computer Vision – ECCV 2014, pages 346–361, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10578-9.
[25] Xxxxxxxx X. Xxxxxx, Xxxxx Xxxxxxxx, and Xxx-Xxxx Xxx. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–1554, July 2006. ISSN 0899-7667.
[26] Xxxxxxx Xxxxxxx, Xxxxx Xxxx, and Xxxxxxx Xxxxxx. Evolution of deep belief neural network parameters for robot object recognition and grasping. Procedia Computer Science, 105: 153 – 158, 2017. ISSN 1877-0509. 2016 IEEE International Symposium on Robotics and Intelligent Sensors, XXXX 0000, 00-00 Xxxxxxxx 0000, Xxxxx, Xxxxx.
[27] G. Xxxxx, X. Xxx, X. v. d. Xxxxxx, and K. Q. Weinberger. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261–2269, July 2017.
[28] Xxxxxxx Xxxxxxxxx and Xxxxxxxxx Xxxxxxxxxxxx. Meta-Learning in Computational Intelligence, chapter Universal
Meta-Learning Architecture and Algorithms, pages 1–00. Xxxxxxxx Xxxxxx Xxxxxxxxxx, Xxxxxx, Xxxxxxxxxx, 0000. ISBN 978-3-642-20980-2.
[29] Xxxxxxx X. Gelbart Brandon Reagen Xxx Xxxxx Xxxx X. Whatmough Xxxxx Xxxxxx José Xxxxxx Xxxxxxxxxx-Xxxxxx, Xxxxxx Xxxx´xxxxx-Xxxxxx and Xx-Xxxx Xxx. A comparison of several models on hardware accelerator data for deep neural networks design using bayesian optimization. In ICML 2018 AutoML Workshop, JMLR Workshop and Conference Proceedings. XXXX.xxx, 2018.
[30] Diederik P. Xxxxxx and Xxxxx Xx. Xxxx: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[31] Xxxx Xxxxxxxxxx, Xxxx Xxxxxxxxx, and Xxxxxxxx X. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, pages 1097–1105, USA, 2012. Curran Associates Inc.
[32] H. Xxxxxxxxxx, X. Xxxxxx, X. Louradour, and P. Lamblin. Exploring strategies for training deep neural networks. Journal of Machine Learning Research, 10:1–40, 2009.
[33] Xxxxxx Xxx, Xxxxxx Xxxx, Xxxxxxxx Xxxxxx, Xxx Xxx, Xx-Xxx Xx, Xx Xxx-Xxx, Xxxx X. Xxxxxx, Xxxxxxxx Xxxxx, and Xxxxx Xxxxxx. Progressive neural architecture search. CoRR, abs/1712.00559, 2017.
[34] Xxxxxxx Xxx, Xxxxx Xxxxxxxx, Xxxxx Xxxxxxx, Xxxxxxxxxx Xxxxxxxx, and Xxxxx Xxxxxxxxxxx. Hierarchical representations for efficient architecture search. CoRR, abs/1711.00436, 2017.
[35] Xxxxx Xxxxxxx Xxxxxxx and Xxxxx Xxxxxx. Memetic evolution of deep neural networks. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’18, pages
000–000, Xxx Xxxx, XX, XXX, 0000. ACM. ISBN 978-1-
0000-0000-0.
[36] Xxxxx Xxxxxxx Xxxxxxx, Xxxxx Xxxxxx, Xxxxxx Xxxxxxx, Xxxxxxxx Xxxxxxx Xxxxx, and Xxx´e Xxxxxxx Xxxxxx. Particle swarm optimization for hyper-parameter selection in deep neural networks. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’17, pages 000–000, Xxx Xxxx, XX, XXX, 0000. ACM. ISBN 978-1-4503-4920-8.
[37] Xxxx Xxxxxxxxxx and Xxxxx Xxxxxx. CMA-ES for hyper-parameter optimization of deep neural networks. CoRR, abs/1604.07269, 2016.
[38] A. Mart´ın, F. Xxxxxxx-Xxxxxxx, X. Xxxxxxx, and D. Camacho. Evolving deep neural networks architectures for android malware classification. In IEEE Congress on Evolutionary Computation (CEC), pages 1659–1666, 2017.
[39] RI Mckay, NX Hoai, PA Whigham, Y Xxxx, and X X Xxxxx. Grammar-based Genetic Programming: a survey. Genetic Programming and Evolvable Machines, 11(3):365–396, 2010.
[40] Xxxxx Xxxxxxxxxxxx, Xxxxx Xxx Xxxxx, Xxxxxx Xxxxxxxx, Xxxxxx Xxxxx, Xxxxxx Xxxx, Xxxxxxx Xxxxxxx, Xxxx Xxxx, Xxxxxx Xxxxxxxx, Xxxxxx Xxxxxxxxx, Xxxxx Xxxxx, and Xxxxx Xxx- xxx. Evolving deep neural networks. CoRR, abs/1703.00548, 2017.
[41] Xxxxxxx Xxxxxxxxxx Xxxxxx, Xxxxx Xxxxxx, Xxxxx Xxxxx, Xxxxxx X Xxxxxx, Xxxxxxxxx Xxxxxxx, and Xxxxxxx Xxxxxx. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature Communications, 9:2383, 2018.
[42] X X’Xxxxx and X Xxxx. Grammatical evolution. IEEE Transactions on Evolutionary Computation, 5(4):349–358, 2001.
[43] Xxxxx Xxxxxx, Xxxx Xxx, Xxxxxx Xxxx, Xxx Xxx, Xxxxxxxx Xxxx, Xxxxxxxxx Xx, Xxxx Xxxx, Xxx Xxxx, Xxxxxxx Xxxxx, Xxxx Xxxx, Xxxxxxx Xxx, Xxxxxx Xxxx, Xxxx Xxxxxx Xxx, Xxxxxxxx Xxxx, and Xxxxxx Xxxx. Deepid-net: multi-stage and deformable deep convolutional neural networks for object detection. CoRR, abs/1409.3505, 2014.
[44] Xxxxxx X. Xxxxx and Xxxx Xxxxxxx. Automating the Design of Data Mining Algorithms: An Evolutionary Computation Approach. Springer, 1st edition, 2009. ISBN 3642025404, 9783642025402.
[45] Xxxxx Xxxxxxxxx and Xxxxxx Xxxxxx. Lamarckian evolution of convolutional neural networks. CoRR, abs/1806.08099, 2018.
[46] Xxxxxxx Xxxx, Xxxxxx Xxxxx, Xxxxxx Xxxxx, Xxxxxxx Xxxxxx, Xxxxxx Xxxx Xxxxxxxx, Xxx Xxx, Quoc V. Le, and Xxxxxx Xxxxxxx. Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 2902–2911, 2017.
[47] Xxxxxxx Xxxx, Xxxx Xxxxxxxx, Xxxxxxx Xxxxx, and Quoc V. Le. Regularized evolution for image classifier architecture search. CoRR, abs/1802.01548, 2018.
[48] Xxxxx Xxxxxxxx and Xxxxxx Xxxxxxxxx. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 0000, Xxx Xxxxx, XX, XXX, May 7-9, 2015, Conference Track Proceedings, 2015.
[49] X. X. Xxxxxxx, X. Xxxxx, X. Xxxxxx, and R. Miikkulainen. Designing neural networks through neuroevolution. Nature Machine Intelligence, 1(1):24–35, 2019.
[50] Xxxxxx Xxxxxxxx Xxxx, Xxxxxxxx Xxxxxxxx, Xxxxxxx Xxxxx, Xxxx Xxxxxx, Xxxxxxx X. Xxxxxxx, and Xxxx Xxxxx. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. CoRR, abs/1712.06567, 2017.
[51] Xxxxxxxx Xxxxxxxx, Xxxxxxxx Xxxxxxxxx, and Xxxxxxxx Xxxxx. A genetic programming approach to designing convolutional neural network architectures. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO
’17, pages 000–000, Xxx Xxxx, XX, XXX, 0000. ACM. ISBN 978-1-4503-4920-8.
[52] Xxxxx Xxx, Xxxx Xxx, and Xxxxxxx Xxxxx. Evolving deep convolutional neural networks for image classification. CoRR, abs/1710.10741, 2017.
[53] C. Szegedy, Xxx Xxx, Yangqing Jia, P. Xxxxxxxx, X. Xxxx,
D. Xxxxxxxx, X. Xxxxx, X. Xxxxxxxxx, and X. Xxxxxxxxxx. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–9, 2015. doi: 10.1109/CVPR.2015.7298594.
[54] Xxxxxxxxx Xxxxxxx, Xxxxxxx Xxxxxxxxx, Xxxxxx Xxxxx, Xxxxxxxx Xxxxxx, and Xxxxxxxx Xxxxx. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Xxxxxxxxxxx, XXXX 0000, Xxx Xxxxx, XX, XXX, June 27-30, 2016, pages 2818–2826, 2016.
[55] Xxxxxxxxx Xxxxxxx, Xxxxxx Xxxxx, Xxxxxxx Xxxxxxxxx, and Xxxxxxxxx X. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 4278–4284, 2017.
[56] C. Xxxxxxxx, X. Xxxxxx, X. X. Xxxx, and K. Leyton-Brown. Auto-weka: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, pages 847–855. ACM, 2013. ISBN 978-1-4503-2174-7.
[57] Xxxx X. Xxxxxx and Xxxxx X. Stracuzzi. Many-layered learning. Neural Comput., 14(10):2497–2529, 2002. ISSN 0899-7667.
[58] Xxxxxx Xxxxxxx, Xxxxxxx Xxxxx, and Xxxxxxxxx Xxxxxxxx. A survey on neural architecture search. CoRR, abs/1905.01392, 2019.
[59] L. Xie and X. Yuille. Genetic cnn. In 2017 IEEE Interna- tional Conference on Computer Vision (ICCV), pages 1388– 1397, Oct 2017. doi: 10.1109/ICCV.2017.154.
[60] Xxxxxxx Xxx, Xxxx X. Xxxxxxxx, Xxxxx Xxxxxxx, Xxxxxxx Tu, and Xxxxxxx He. Aggregated residual transformations for deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pages 5987–5995, 2017.
[61] Xxxxx Xxxx, Xxxxx Xxxxx, Xxxx Xxxxxxxxxxxx, Xxxxxxx Xxxx, Xxxxx Xxxxxx, and Xxxxx Xxxxxx. Nas-bench-101: Towards reproducible neural architecture search. In ICML, volume 97 of Proceedings of Machine Learning Research, pages 7105–7114. PMLR, 2019.
[62] Xxxxxx X. Xxxxx, Xxxxx X. Xxxx, Xxxxxx Xxxxxxxx, Xxxxxxx X. Xxxxxx, Xxxxxx X. Xxxxxxxxx, Xxxxxx X. Potok, Xxxxxx X. Xxxxxx, Xxxxxxx Xxxxxx, and Xxxxxxxx Xxxxxx. Evolving deep networks using HPC. In Proceedings of the Machine Learning on HPC Environments, MLHPC’17, pages 7:1–7:7, Xxx Xxxx, XX, XXX, 0000. ACM. ISBN 978-1-4503-5137-9.
[63] Xxxxxxx X. Zeiler and Xxx Xxxxxx. Stochastic pooling for regularization of deep convolutional neural networks. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Conference Track Proceedings, 2013.
[64] Xxxxxxx X. Xxxxx, Xxxxx Xxxxx, Xxxxxxxx X. Xxxxxx, and Xxxxx Xx. Lookahead optimizer: k steps forward, 1 step back. CoRR, abs/1907.08610, 2019.
[65] Xxxxxxxxx Xxxxx, Xxxxxxxx Xx, Xxxx Xxxxxx Xxx, and Xxxxx Xxx. Polynet: A pursuit of structural diversity in very deep networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3900–3908, 2017.
[66] Xxxxxx Xxxx, Xxxxx Xxxxxxxxx, Xxxxxxxx Xxxxxx, and Quoc V. Le. Learning transferable architectures for scalable image recognition. CoRR, abs/1707.07012, 2017.
Certificado de Conclusão
Identificação de envelope: E61169618C5D44D9A4A5085C9F808F82 Status: Concluído Assunto: DocuSign: 1912-31676_TermoCoopCham2019_RicardoCerri.pdf
Origem do Envelope:
Qtde Págs Documento: 28 Assinaturas: 8 Remetente do envelope:
Qtde Págs Certificado: 6 Rubrica: 0 Instituto Serrapilheira
Assinatura guiada: Ativado
Selo com ID do Envelope: Ativado
Fuso horário: (UTC-08:00) Hora do Pacífico (EUA e Canadá)
Xxx Xxxxxx xx Xxxxxxxx, 000
Xxx xx Xxxxxxx, Xxx xx Xxxxxxx 00000000 xxxxxxx.xxxxxx@xxxx.xxx.xx
Endereço IP: 200.142.127.110
Rastreamento de registros
Status: Original
26/06/2020 09:19:14
Portador: Instituto Serrapilheira xxxxxxx.xxxxxx@xxxx.xxx.xx
Local: DocuSign
Eventos de Signatários Assinatura Data/Hora
Xxxxxx Xxxxxx xxxxxxxxxxxxxx@xxxxx.xxx
Nível de Segurança: E-mail, Autenticação da conta (Nenhuma), Código de acesso
Adoção de assinatura: Estilo pré-selecionado Usando endereço IP: 177.195.48.99
Enviado: 26/06/2020 09:31:22
Reenviado: 28/06/2020 07:52:40
Visualizado: 29/06/2020 06:52:12
Assinado: 29/06/2020 06:52:36
Termos de Assinatura e Registro Eletrônico:
Aceito: 29/06/2020 06:52:12
ID: 375cde22-be70-4cff-ba2f-c6c3ea2e1488
Xxxxxxx Xxxxxx xxxxxxx.xxxxxx@xxxx.xxx.xx Instituto Serrapilheira
Nível de Segurança: E-mail, Autenticação da conta
(Nenhuma) Adoção de assinatura: Estilo pré-selecionado
Usando endereço IP: 200.142.127.110
Enviado: 26/06/2020 09:31:22
Visualizado: 26/06/2020 09:31:52
Assinado: 26/06/2020 09:31:59
Termos de Assinatura e Registro Eletrônico:
Não disponível através do DocuSign
Xxxx Xxxxxxxxx xxxx@xxxxxxxxxxxxx.xxx Director president
Nível de Segurança: E-mail, Autenticação da conta
(Nenhuma), Código de acesso Adoção de assinatura: Estilo pré-selecionado
Usando endereço IP: 191.205.201.139
Enviado: 26/06/2020 09:31:21
Visualizado: 26/06/2020 09:33:56
Assinado: 26/06/2020 09:34:01
Termos de Assinatura e Registro Eletrônico:
Aceito: 22/02/2018 23:54:40
ID: 071e680a-c6f6-4918-8c1e-35cabdc66ac6
xxxxxx Xxxxxxxxx xxxxxx.xxxxxxxxx@xxxx.xxx.xx
Nível de Segurança: E-mail, Autenticação da conta (Nenhuma), Código de acesso
Adoção de assinatura: Estilo pré-selecionado Usando endereço IP: 179.210.128.239
Enviado: 26/06/2020 09:31:22
Visualizado: 26/06/2020 12:23:52
Assinado: 26/06/2020 12:24:02
Termos de Assinatura e Registro Eletrônico:
Aceito: 26/06/2020 12:23:52
ID: 920384e9-1b13-4c3f-94da-c2b9f578b23c
Xxxxxx xx Xxxxxx et d'Xxxxxxxxx xxxxxx.xxxxxx@xxxx.xxx.xx Attorney
Nível de Segurança: E-mail, Autenticação da conta
(Nenhuma), Código de acesso Adoção de assinatura: Estilo pré-selecionado
Usando endereço IP: 179.218.199.183
Enviado: 26/06/2020 09:31:22
Visualizado: 26/06/2020 09:38:19
Assinado: 26/06/2020 09:38:42
Termos de Assinatura e Registro Eletrônico:
Aceito: 06/11/2019 11:19:35
ID: 937e7631-4e8b-4410-b598-202336871479
Xxxxxxx Xxxxx xxxxxxx@xxxxx.xxx
Nível de Segurança: E-mail, Autenticação da conta (Nenhuma), Código de acesso
Adoção de assinatura: Estilo pré-selecionado Usando endereço IP: 187.107.157.87
Enviado: 26/06/2020 09:31:23
Visualizado: 26/06/2020 09:47:25
Assinado: 26/06/2020 09:58:21
Termos de Assinatura e Registro Eletrônico:
Aceito: 26/06/2020 09:47:25
ID: e25aefb0-9678-4bd2-ba15-dea1098f75e0
Xxxxxxx Xxxx xxxxxxx.xxxx@xxxxxxx.xxx.xx Fundação Xxxxxx Xxxxxxxxx
Security Level: Email, Account Authentication
(None), Access Code Signature Adoption: Pre-selected Style
Using IP Address: 177.128.109.214
Sent: 26/06/2020 09:31:22
Viewed: 26/06/2020 10:57:37
Signed: 26/06/2020 10:58:04
Electronic Record and Signature Disclosure:
Accepted: 14/12/2018 10:50:28
ID: 5747104c-9200-46c9-b319-9f93aa13a1dc
Xxxxx Xxxxxxxxx Xxxxxxx Xxxxxxxx xxxxx@xxxxxx.xx
Security Level: Email, Account Authentication (None), Access Code
Electronic Record and Signature Disclosure:
Accepted: 20/08/2019 13:13:39
ID: 71f068d0-e7a7-40a1-a254-f7998dae11c0
Signature Adoption: Pre-selected Style Using IP Address: 189.103.17.175
Sent: 26/06/2020 09:31:23
Resent: 28/06/2020 07:52:40
Resent: 30/06/2020 11:31:08
Resent: 03/07/2020 05:24:11
Resent: 20/07/2020 13:15:16
Viewed: 26/07/2020 10:52:16
Signed: 26/07/2020 10:53:00
In Person Signer Events | Signature | Timestamp |
Editor Delivery Events | Status | Timestamp |
Agent Delivery Events | Status | Timestamp |
Intermediary Delivery Events | Status | Timestamp |
Certified Delivery Events | Status | Timestamp |
Carbon Copy Events | Status | Timestamp |
Witness Events | Signature | Timestamp |
Notary Events | Signature | Timestamp |
Envelope Sent | Hashed/Encrypted | 20/07/2020 13:15:17 |
Certified Delivered | Security Checked | 26/07/2020 10:52:16 |
Signing Complete | Security Checked | 26/07/2020 10:53:00 |
Completed | Security Checked | 26/07/2020 10:53:00 |
Payment Events | Status | Timestamp |
Electronic Record and Signature Disclosure |
Electronic Record and Signature Disclosure created on: 16/02/2018 09:45:17
Parties agreed to: Xxxxxx Xxxxxx, Xxxx Xxxxxxxxx, Isabel Domingues, Xxxxxx xx Xxxxxx et d'Xxxxxxxxx, Xxxxxxx Xxxxx, Xxxxxxx Xxxx, Xxxxx Xxxxxxxxx Xxxxxx
CONSUMER DISCLOSURE
From time to time, Instituto Serrapilheira (we, us or Company) may be required by law to provide to you certain written notices or disclosures. Described below are the terms and conditions for providing to you such notices and disclosures electronically through the DocuSign, Inc. (DocuSign) electronic signing system. Please read the information below carefully and thoroughly, and if you can access this information electronically to your satisfaction and agree to these terms and conditions, please confirm your agreement by clicking the ‘I agree’ button at the bottom of this document.
Getting paper copies
At any time, you may request from us a paper copy of any record provided or made available electronically to you by us. You will have the ability to download and print documents we send to you through the DocuSign system during and immediately after the signing session and, if you elect to create a DocuSign signer account, you may access them for a limited period of time (usually 30 days) after such documents are first sent to you. After such time, if you wish for us to send you paper copies of any such documents from our office to you, you will be charged a $0.00 per-page fee. You may request delivery of such paper copies from us by following the procedure described below.
Withdrawing your consent
If you decide to receive notices and disclosures from us electronically, you may at any time change your mind and tell us that thereafter you want to receive required notices and disclosures only in paper format. How you must inform us of your decision to receive future notices and disclosure in paper format and withdraw your consent to receive notices and disclosures electronically is described below.
Consequences of changing your mind
If you elect to receive required notices and disclosures only in paper format, it will slow the speed at which we can complete certain steps in transactions with you and deliver services to you, because we will need first to send the required notices or disclosures to you in paper format, and then wait until we receive back from you your acknowledgment of your receipt of such paper notices or disclosures. To indicate to us that you are changing your mind, you must withdraw your consent using the DocuSign ‘Withdraw Consent’ form on the signing page of a DocuSign envelope instead of signing it. This will indicate to us that you have withdrawn your consent to receive required notices and disclosures electronically from us and you will no longer be able to use the DocuSign system to receive required notices and consents electronically from us or to sign electronically documents from us.
All notices and disclosures will be sent to you electronically
Unless you tell us otherwise in accordance with the procedures described herein, we will provide electronically to you through the DocuSign system all required notices, disclosures, authorizations, acknowledgements, and other documents that are required to be provided or made available to you during the course of our relationship with you. To reduce the chance of you inadvertently not receiving any notice or disclosure, we prefer to provide all of the required notices and disclosures to you by the same method and to the same address that you have given us. Thus, you can receive all the disclosures and notices electronically or in paper format through the paper mail delivery system. If you do not agree with this process, please let us know as described below. Please also see the paragraph immediately above that describes the consequences of your electing not to receive delivery of the notices and disclosures electronically from us.
How to contact Instituto Serrapilheira:
You may contact us to let us know of your changes as to how we may contact you electronically, to request paper copies of certain information from us, and to withdraw your prior consent to receive notices and disclosures electronically as follows:
To contact us by email send messages to: xxxxxx@xxxxxxxxxxxxx.xxx
To advise Instituto Serrapilheira of your new e-mail address
To let us know of a change in your e-mail address where we should send notices and disclosures electronically to you, you must send an email message to us at xxxxxx@xxxxxxxxxxxxx.xxx and in the body of such request you must state your previous e-mail address and your new e-mail address. We do not require any other information from you to change your email address.
In addition, you must notify DocuSign, Inc. to arrange for your new email address to be reflected in your DocuSign account by following the process for changing e-mail in the DocuSign system.
To request paper copies from Instituto Serrapilheira
To request delivery from us of paper copies of the notices and disclosures previously provided by us to you electronically, you must send us an e-mail to xxxxxx@xxxxxxxxxxxxx.xxx and in the body of such request you must state your e-mail address, full name, US Postal address, and telephone number. We will bill you at that time for any applicable fees.
To withdraw your consent with Instituto Serrapilheira
To inform us that you no longer want to receive future notices and disclosures in electronic format you may:
i. decline to sign a document from within your DocuSign session, and on the subsequent page, select the check-box indicating you wish to withdraw your consent, or you may;
ii. send us an e-mail to xxxxxx@xxxxxxxxxxxxx.xxx and in the body of such request you must state your e-mail, full name, US Postal Address, and telephone number. We do not need any other information from you to withdraw consent. The consequences of your withdrawing consent for online documents will be that transactions may take a longer time to process.
Required hardware and software
Operating Systems: | Windows® 2000, Windows® XP, Windows Vista®; Mac OS® X |
Browsers: | Final release versions of Internet Explorer® 6.0 or above (Windows only); Mozilla Firefox 2.0 or above (Windows and Mac); Safari™ 3.0 or above (Mac only) |
PDF Reader: | Acrobat® or similar software may be required to view and print PDF files |
Screen Resolution: | 800 x 600 minimum |
Enabled Security Settings: | Allow per session cookies |
** These minimum requirements are subject to change. If these requirements change, you will be asked to re-accept the disclosure. Pre-release (e.g. beta) versions of operating systems and browsers are not supported.
Acknowledging your access and consent to receive materials electronically
To confirm to us that you can access this information electronically, which will be similar to other electronic notices and disclosures that we will provide to you, please verify that you were able to read this electronic disclosure and that you also were able to print on paper or electronically save this page for your future reference and access or that you were able to e-mail this disclosure and consent to an address where you will be able to print on paper or save it for your future reference and access. Further, if you consent to receiving notices and disclosures exclusively in electronic format on the terms and conditions described above, please let us know by clicking the ‘I agree’ button below.
By checking the ‘I agree’ box, I confirm that:
• I can access and read this Electronic CONSENT TO ELECTRONIC RECEIPT OF ELECTRONIC CONSUMER DISCLOSURES document; and
• I can print on paper the disclosure or save or send the disclosure to a place where I can print it, for future reference and access; and
• Until or unless I notify Instituto Serrapilheira as described above, I consent to receive exclusively through electronic means all notices, disclosures, authorizations, acknowledgements, and other documents that are required to be provided or made available to me by Instituto Serrapilheira during the course of my relationship with Instituto Serrapilheira.