The growing complexity of integrated circuits has made it impossible for humans to design hardware without the aid of software. This presentation provides a high-level view of the main challenges faced by these tools and the current solutions to those problems.
Non-Fungible Tokens (NFTs) are units of data stored on a blockchain that certify a digital asset to be unique and therefore not interchangeable, while offering a unique digital certificate of ownership. Public attention towards NFTs exploded in 2021, when their market experienced record sales. For long, little was known about the overall structure and evolution of this market. To shed some light on its dynamics, we collected data on 6.1 million trades of 4.7 million NFTs between June 2017 and April 2021 to study the statistical properties of the market and to gauge the predictability of NFT prices. We also studied the properties of the digital items exchanged on the market, finding that the emerging norms of NFT valuation thwart the non-fungibility properties of NFTs. In particular, rarer NFTs: (i) sell for higher prices, (ii) are traded less frequently, (iii) guarantee higher returns on investment (ROIs), and (iv) are less risky, i.e., less prone to yield negative returns.
Luca Maria Aiello
Associate Professor at the IT University of Copenhagen, Denmark
In the first part we cover five current specific problems that motivate the need for responsible AI: (1) discrimination (e.g., facial recognition, justice, sharing economy, language models); (2) phrenology (e.g., biometric-based predictions); (3) unfair digital commerce (e.g., exposure and popularity bias); (4) stupid models (e.g., minimal adversarial AI); and (5) indiscriminate use of computing resources (e.g., large language models). These examples do have a personal bias but set the context for the second part, where we address four challenges: (1) too many principles (e.g., principles vs. techniques); (2) cultural differences; (3) regulation; and (4) our cognitive biases. We finish by discussing what we can do to address these challenges in the near future so as to develop responsible AI.
Ricardo Baeza-Yates is Director of Research at the Institute for Experiential AI of Northeastern University. Before that, he was VP of Research at Yahoo Labs, based in Barcelona, Spain, and later in Sunnyvale, California, from 2006 to 2016. He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and between 2012 and 2016 he was elected to the ACM Council. In 2009 he was named ACM Fellow and in 2011 IEEE Fellow, among other awards and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989, and his areas of expertise are web search and data mining, information retrieval, bias in AI, data science, and algorithms in general.
In this talk I’ll present an overview of the challenges and opportunities for applying data mining and machine learning for tasks in personalized health, including the role of semantics. In particular, I’ll focus on the task of healthy recipe recommendation via the use of knowledge graphs, as well as generating summaries from personal health data, highlighting our work within the RPI-IBM Health Empowerment by Analytics, Learning, and Semantics (HEALS) project.
Mohammed J. Zaki is a Professor and Department Head of Computer Science at RPI. He received his Ph.D. degree in computer science from the University of Rochester in 1998. His research interests focus on novel data mining and machine learning techniques, particularly for learning from graph-structured and textual data, with applications in bioinformatics, personal health, and financial analytics. He has around 300 publications (and 6 patents), including the Data Mining and Machine Learning textbook (2nd edition, Cambridge University Press, 2020). He founded the BIOKDD Workshop and recently served as PC chair for CIKM'22. He currently serves on the Board of Directors for ACM SIGKDD. He was a recipient of the NSF and DOE Career Awards. He is a Fellow of the IEEE, a Fellow of the ACM, and a Fellow of the AAAS.
LLVM is a set of libraries and tools that facilitate the development of programming languages. Several languages popular today are built and compiled via LLVM: C, C++, Rust, and Julia, for example. LLVM defines an intermediate code representation (an assembly-like language). By translating a high-level language into this intermediate code, one gains access to the vast range of static analyses and optimizations already available in LLVM. In this talk we will see how to use LLVM as a tool to compile and visualize programs, we will write code in the intermediate representation, and we will develop a code analysis that can be plugged into that infrastructure.
Despite several efforts to detect and combat online disinformation, fake-news campaigns, particularly on social media platforms, remain a problem with great impact on societies. We argue that, in order to develop effective solutions against disinformation, it is essential to understand (analyze and model) how information propagates, often crossing the boundaries of different platforms and reaching a large audience. In this talk, I will discuss some of the main challenges in combating online disinformation and present recent results from our research group on the analysis of fake-news dissemination. Our results address aspects related to content, propagation dynamics, and the information-dissemination network, as well as characteristics of the users, as human beings, who contribute most to the spread of disinformation on the Web.