
Top 10 Middle East IT stories of 2022 – ComputerWeekly.com

This year has seen the Middle East region host one of the world's biggest sporting events for the first time, when the FIFA World Cup arrived in Qatar in November.

Not only did the oil-rich nation face massive construction challenges, with stadiums and other physical infrastructure needed to host such a large and prestigious event, but it also had to be ready for inevitable cyber attacks.

Cyber security features heavily in this yearly review, with analysis of projects in the United Arab Emirates (UAE) and Saudi Arabia.

Hosting major sporting events might be something countries in the Middle East aspire to do more often as they diversify their economies and reduce their reliance on oil revenues. This top 10 also features articles about some of the new industries being created in the region, the huge sums being invested, as well as some of the challenges being faced.

Here are Computer Weekly's top 10 Middle East IT stories of 2022.

Qatar hosts the FIFA World Cup this year, the first time the event has been staged in the Arab world. Cyber security experts in the country predicted that ticketing, hotel bookings and restaurant reservations would be faked by hackers to capture personal data from people travelling to Qatar.

Also, phishing and social engineering were expected to be used to steal personal and financial information from anyone using the internet to get information about the tournament.

Saudi Arabia's job market is largely shaped by the push for Saudization, a colloquial term for a movement that is officially called nationalisation.

Part of this push is a set of regulations called Nitaqat, which falls under the jurisdiction of the Ministry of Labour and Social Development, and requires organisations operating in Saudi Arabia to maintain certain percentages of Saudi nationals in their workforce.

A group of Google workers and Palestinian rights activists are calling on the tech giant to end its involvement in the secretive Project Nimbus cloud computing contract, which involves the provision of artificial intelligence and machine learning tools to the Israeli government.

Calls for Google to end its involvement in the contract follow claims made by Ariel Koren, a product marketing manager at Google for Education since 2015 and member of the Alphabet Workers Union, that she was pressured into resigning as retaliation for her vocal opposition to the deal.

A survey has revealed that UAE residents believe 3D printing technology will become widespread in the country, and expect it to have the most positive impact on society.

The online survey of more than 1,000 UAE citizens, carried out by YouGov, asked them for their opinions on 16 emerging technologies. According to YouGov: "Data shows that of all the 16 listed technologies, UAE residents have most likely heard a lot about or have some awareness of cryptocurrency, virtual reality, self-driving cars and 3D printing."

The distinction between protecting information technology and protecting operational technology (OT) became very clear in 2010, when the Iranian nuclear enrichment facility Natanz was attacked by Stuxnet malware.

OT includes programmable logic controllers, intelligent electronic devices, human-machine interfaces and remote terminal units that allow humans to operate and run an industrial facility using computer systems.

In a region that is experiencing an unprecedented increase in cyber security threats, the UAE is taking actions that are already paying off.

The increase in threats is described in the State of the market report 2021 and the State of the market report 2022 annual reports published by Help AG. These studies focus exclusively on digital security in the Middle East, highlighting the top threats and the sectors most impacted, and providing advice on where companies should invest their resources.

In September 2021, the Abu Dhabi Department of Health announced that it would create a drone delivery system to be used to deliver medical supplies (medicine, blood units, vaccines and samples) between laboratories, pharmacies and blood banks across the city.

The first version of the system will be based on a network of 40 different stations that drones fly in and out of. Over time, the number of stations is expected to grow.

Middle East-based IT leaders expect IT budgets for 2022 to be equal to, or above, pre-pandemic levels, with security spending expected to take the biggest share.

According to this year's TechTarget/Computer Weekly annual IT Priorities survey, 63% of IT decision-makers in the Middle East region are planning to increase their IT budgets by 5% or more in 2022.

Accenture is to head up a consortium to develop and support a national payments infrastructure in the UAE that will enable next-generation payments.

Alongside suppliers G42 and SIA, the digital payments arm of Nexi Group, Accenture was selected by the Central Bank of the UAE to build and operate the UAE's National Instant Payment Platform over the next five years.

Saudi Arabia is investing $6.4bn in the digital technologies of the future and the tech startups that will harness them.

The announcement was made during a major new tech event, known as LEAP, in the Saudi capital Riyadh.



Potential cloud protests and maybe, finally, more JADC2 jointness … – Breaking Defense

Pentagon grapples with growth of artificial intelligence. (Graphic by Breaking Defense, original brain graphic via Getty)

WASHINGTON – After military information technology and cybersecurity officials ring in the new year, they'll be coming back to interesting challenges in an alphabet soup of issues: JWCC, JADC2 and CDAO, to name a few.

Of all the things that are likely to happen in the network and cyber defense space, those are three key things I'm keeping an especially close eye on in 2023. Here's why:

[This article is one of many in a series in which Breaking Defense reporters look back on the most significant (and entertaining) news stories of 2022 and look forward to what 2023 may hold.]

Potential JWCC Protests

On Dec. 7, the Pentagon awarded Amazon Web Services, Google, Microsoft and Oracle each a piece of the $9 billion Joint Warfighting Cloud Capability contract after sending the companies direct solicitations back in November.

Under the effort, the four vendors will compete to get task orders. Right now, it's unclear when exactly the first task order will be rolled out or how many task orders will be made.

It's also possible that, just like the Joint Enterprise Defense Infrastructure contract, JWCC could be mired in legal disputes, particularly when it comes to which vendor gets what task order.

"As you know, with any contract, a protest is possible," Lt. Gen. Robert Skinner, director of the Defense Information Systems Agency, told reporters Dec. 8 following the JWCC awards. "What we really focused on was, 'Here are the requirements that the department needs.' And based on those requirements, we did an evaluation, we did market research, we did evaluation to see which US-based [cloud service providers] were able to meet those requirements... The decision based on whether there's a protest or not really didn't play into it, because we want to focus on the requirements and who could meet those requirements."

Sharon Woods, director of DISA's Hosting and Compute Center, said at the same briefing that under the acquisition rules for the task orders, "there's a $10 million threshold and a $25 million threshold" on protests.

"So it's really dependent on how large the task order is," she added.

If there is a protest, the DoD could potentially see delays in a critical program it's been trying to get off the ground for years now.

A New Office To Oversee JADC2

After a year of a lot of back and forth about the Pentagon's Joint All Domain Command and Control effort to better connect sensors to shooters, a new office has been stood up with the aim of bringing jointness to the infamously nebulous initiative.

In October, DoD announced the creation of the Acquisition, Integration and Interoperability Office, housed within the Office of the Secretary of Defense. Dave Tremper, director of electronic warfare in the Office of the Undersecretary of Defense for Acquisition and Sustainment, will lead the office, and the first task will be finding how to "truly get JADC2 across the department," Chris O'Donnell, deputy assistant secretary of defense for platform and weapon portfolio management in OUSD (A&S), said Oct. 27.

The creation of the office came a few months after Deputy Defense Secretary Kathleen Hicks said she wanted more high-level oversight of JADC2 and following complaints from military service officials.

Tracking The CDAO

It'll be interesting to see what the new Chief Digital and Artificial Intelligence Officer, Craig Martell, and his office will accomplish over the next year. Martell, a former Lyft exec, was tapped as the Pentagon's first CDAO earlier in 2022.

As CDAO, Martell has some big responsibilities and can't pull on any prior Pentagon experience. When the CDAO officially stood up June 1, the office absorbed the Joint AI Center, Defense Digital Service and Office of Advancing Analytics – all key parts of the Pentagon's technology network. And there are plans to permit the chief data officer to report directly to the CDAO. (The CDO is operationally aligned to the office and has been rolled into one of its directorates, according to an internal DoD memorandum that was obtained by Breaking Defense in May.)

Already Martell's priorities have slightly shifted: He initially thought his job would entail producing tools for DoD to do modeling, but over the first few months on the job, there's been a focus on driving high-quality data. During his remarks at the DIA DoDIIS Worldwide Conference Dec. 13, Martell said what most people think and demand of artificial intelligence is "magical pixie dust."

"What they're really saying is, excuse my language, 'Damn, I have a really hard problem and wouldn't it be awesome if a machine could solve it for me?'" he said. "But what we really can deliver in lieu of that – because I'm here to tell you that we can't deliver magical pixie dust, sorry – but what we can deliver is really high-quality data."

Martell is also working to further other DoD efforts like zero trust, the Joint Warfighting Cloud Capability and JADC2. The Pentagon has set an ambitious goal of implementing zero trust across the department by 2027 and released a zero-trust strategy in November. The question remains as to what exactly a full implementation of zero trust will look like.



HostColor.com Ends 2022 With 29 Locations For Delivery of Cloud … – Benzinga

HostColor.com (HC) has reported to the technology media that it ends 2022 with 29 virtual data centers used for delivering cloud infrastructure services. As of December 2022, the company delivers Hosted Private Cloud and Public Cloud Server services based on VMware ESXi, Proxmox VE, and Linux Containers' virtualization technologies, as well as 10Gbps Dedicated Servers, from its 29 data center locations.

Localization of the Cloud services & More Bandwidth At Lower Costs

HostColor announced in November 2022 its cloud infrastructure service priorities for 2023 - "Localization of the Cloud services" and "Increased bandwidth rate at fixed monthly cost". The company has also said that one of its major business goals for 2023 is to help SMBs take control of their IT infrastructure in a cloud service market characterized by increasing cloud lock-in imposed by Big Tech and the major cloud providers.

SMBs To Take Control Of Their IT Infrastructure?

"There are two simultaneously developing trends in the Cloud service market - a growing pressure on the smaller and medium IT infrastructure providers by the leading hyperscalers (compute clouds), and a growing dependence of Users of cloud services from the same those big major clouds. The Users' dependence comes to a point of de-facto cloud lock-in," says HostColor.com founder and CEO Dimitar Avramov. He adds that the biggest cloud infrastructure providers impose complex contractual and pricing terms and procedures that make transitioning data and services to another vendor's platform difficult and very costly.

"As a result of the hyperscalers' policies the cloud service users are highly dependent (locked-in) on a single corporate cloud platform. When it comes to the structure of the services and billing, the business models of the major technology clouds feature a complete lack of transparency. All this results in significant loss of money for SMBs that vary from a couple of thousands to millions of dollars on annual basis, depending on the cloud services they use." explains HostColor's executive. He adds that his company is determined to raise users' awareness about the cloud lock-in and to help as many business owners as it can, to move out their IT infrastructures from the major hyperscalers to smaller and medium cloud service providers.

Cloud computing experts have long been sounding the alarm that vendor lock-in in the cloud is real.

David Linthicum says in an article published in InfoWorld on July 2, 2021, that "Cloud-native applications have built-in dependencies on their cloud hosts, such as databases, security, governance, ops tools, etc." and that "It's not rocket science to envision the day when a cloud-native application needs to move from one cloud to another. It won't be easy."

In a CIO.com publication titled "10 dark secrets of the cloud", the author, Peter Wayner, warns cloud users, "You're locked in more than you think", and adds that "Even when your data or the services you create in the cloud are theoretically portable, simply moving all those bits from one company's cloud to another seems to take quite a bit of time." Mr. Wayner also says that users of the major hyperscalers are "paying a premium - even if it's cheap" and that performance of the major clouds "isn't always as advertised".

Internal research conducted by HostColor.com between 2019 and 2022 examines the terms of service, pricing, and Cloud IaaS models of the five biggest cloud infrastructure providers. The research shows that their cloud service terms and pricing models feature a high level of opacity. This results in significant losses for their users, varying from a couple of thousand to hundreds of thousands of dollars annually, depending on the services they use.

About HostColor

HostColor.com ( https://www.hostcolor.com ) has been a global IT infrastructure and web hosting service provider since 2000. The company has its own virtual data centers and a capacity for provisioning dedicated servers and colocation services in 50 data centers worldwide. Its subsidiary HCE ( https://www.hostcoloreurope.com ) operates cloud infrastructure and delivers dedicated hosting services in 19 European countries.




Top Web Hosting and VPS Services Reviewed – Digital Journal

Web hosting refers to the practice of hosting a website on a server so that it can be accessed by users over the internet. There are several types of web hosting options available, including shared hosting, virtual private server (VPS) hosting, and dedicated server hosting.

Shared hosting is the most basic and affordable type of web hosting. It involves sharing a single physical server and its resources with multiple websites. This means that each website shares the same CPU, RAM, and disk space as other websites on the server. Shared hosting is suitable for small websites with low traffic and limited resources.

VPS hosting, on the other hand, provides a more isolated and secure environment for hosting a website. In VPS hosting, a single physical server is divided into multiple virtual servers, each with its own resources and operating system. This allows each website to have its own dedicated resources, making it more performant and scalable than shared hosting. VPS hosting is a good option for websites with moderate traffic and resource requirements.

Dedicated server hosting is the most powerful and expensive type of web hosting. In this type of hosting, a single website is hosted on a physical server that is dedicated solely to it. This means that the website has access to all of the server's resources and is not sharing them with any other websites. Dedicated server hosting is suitable for large websites with high traffic and resource demands.

Cloud hosting is a type of web hosting that involves hosting a website on a network of virtual servers, which are distributed across multiple physical servers. This allows for greater scalability and flexibility, as the resources of the virtual servers can be easily adjusted to meet the changing needs of the website.

One of the main advantages of cloud hosting is its scalability. With traditional web hosting, if a website experiences a sudden increase in traffic, it may run out of resources and become slow or unavailable. With cloud hosting, the website can easily scale up its resources to meet the increased demand. This is done by adding more virtual servers to the network or increasing the resources of existing virtual servers.

Another advantage of cloud hosting is its reliability. With traditional web hosting, if a physical server goes down, the websites hosted on it will also be unavailable. With cloud hosting, the virtual servers are distributed across multiple physical servers, so if one server goes down, the other servers can continue to serve the website, ensuring that it remains available.

Cloud hosting is also generally more flexible than traditional web hosting, as it allows for the creation of custom configurations and the use of multiple operating systems. It also often includes additional features such as load balancing, automated backups, and monitoring.

Overall, cloud hosting is a good option for websites that require high scalability, reliability, and flexibility. It's often used by large websites with high traffic and resource demands, such as e-commerce websites and enterprise applications. However, it can also be a good choice for smaller websites that want to take advantage of the scalability and reliability of the cloud. We also recommend reading about cloud hosting as well as WordPress hosting on CaveLions.




Tachyum Celebrates 2022 and Announces 2023 Series C and … – Business Wire

LAS VEGAS--(BUSINESS WIRE)--Tachyum ended 2022 with accomplishments including the worldwide debut of Prodigy, the world's first universal processor for high-performance computing, and more than a dozen commercialization partnerships, effectively moving the startup to a leadership position in semiconductors.

2022 marked the introduction of Tachyum's Prodigy to the commercial market. Prodigy exceeded its performance targets and is significantly faster than any processors currently available in hyperscale, HPC and AI markets. With its higher performance and performance per dollar and per watt, Tachyum's Prodigy processor will enable the world's fastest AI supercomputer, currently in planning stages.

Tachyum signed 14 significant MOUs with prestigious universities, research institutes, and innovative companies like the Faculty of Information Technology at Czech Technical University in Prague, Kempelen Institute of Intelligent Technologies, M Computers, Picacity, LuxProvide S.A. (Meluxina supercomputer), Mat Logica, and Cologne Chip. Other agreements are in progress.

Technical Achievements

The launch of Prodigy followed the successful preproduction and Quality Assurance (QA) phases of hardware and software testing on FPGA emulation boards, and achievements in demonstrating Prodigy's integration with major platforms to address multiple customer needs. These included FreeBSD, Security-Enhanced Linux (SELinux), the KVM (Kernel-based Virtual Machine) hypervisor virtualization, and native Docker under the Go programming language (Golang).

Software ecosystem enhancements also included improvements to Prodigy's Unified Extensible Firmware Interface (UEFI) specification-based BIOS (Basic Input Output System) replacement firmware, incorporating the latest versions of the QEMU emulator and GNU Compiler Collection (GCC). These improvements allow quick and seamless integration of data center technologies into Tachyum-based environments.

Tachyum completed the final piece of its core software stack with a Baseboard Management Controller (BMC) running on a Prodigy emulation system. This enables Tachyum to provide OEM/ODMs and system integrators with a complete software and firmware stack, and serves as a key component of the upcoming Tachyum Prodigy 4-socket reference design.

In its hardware accomplishments, Tachyum built its IEEE-compliant floating-point unit (FPU) from the ground up – one of the most advanced in the world, with the highest clock speeds – and progressed to running applications in Linux interactive mode on Prodigy FPGA hardware with SMP (Symmetric Multi-Processing) Linux and the FPU. This proved the stability of the system and allowed Tachyum to move forward with additional testing. It completed LINPACK benchmarks using Prodigy's FPU on an FPGA. LINPACK measures a system's floating-point computing power by solving a dense system of linear equations to determine performance. It is a widely used benchmark for supercomputers.

The company published three technical white papers that unveiled never-before-disclosed architectural designs of the system-on-chip (SoC) and AI training techniques, revealing how Prodigy addresses trends in AI and enables deep learning workloads that are more environmentally responsible, with lower energy consumption and reduced carbon emissions. One paper defined a groundbreaking high-performance, low-latency, low-cost, low-power, highly scalable exascale flattened networking solution that provides a superior alternative to the more expensive, proprietary and limited-scalability InfiniBand communications standard.

Around the world

Tachyum was a highlight of exhibits at Expo 2020 Dubai with the world premiere of the Prodigy Universal Processor for supercomputers, and presented Prodigy at LEAP22 in Riyadh, Saudi Arabia. Tachyum was named one of the "Most Innovative AI Solutions Providers to Watch" by Enterprise World. Company executives were among the featured presenters at the ISC High Performance 2022 and Supercomputing 2022 events.

Looking forward

With its Series C funding, expected to close in 2023, Tachyum will finance the volume production of the Prodigy Universal Processor chip, be positioned for sustained profitability, and increase headcount.

2023 will see the company move to tape-out, silicon samples, production, and shipments. After running LINPACK benchmarks using Prodigy's FPU on an FPGA, there are only four more steps before the final netlist of Prodigy: running UEFI and boot loaders, loading Linux on the FPGA, completing vector-based LINPACK testing with I/O, and I/O with virtualization and RAS (Reliability, Availability and Serviceability).

Prodigy delivers unprecedented data center performance, power, and economics, reducing CAPEX and OPEX significantly. Because of its utility for both high-performance and line-of-business applications, Prodigy-powered data center servers can seamlessly and dynamically switch between workloads, eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization. Tachyum's Prodigy integrates 128 high-performance custom-designed 64-bit compute cores to deliver up to 4x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and 6x for AI applications.


About Tachyum

Tachyum is transforming AI, HPC, public and private cloud data center markets with its recently launched flagship product. Prodigy, the world's first Universal Processor, unifies the functionality of a CPU, a GPU, and a TPU in a single processor that delivers industry-leading performance, cost, and power efficiency for both specialty and general-purpose computing. When Prodigy processors are provisioned in a hyperscale data center, they enable all AI, HPC, and general-purpose applications to run on one hardware infrastructure, saving companies billions of dollars per year. With data centers currently consuming over 4% of the planet's electricity, predicted to be 10% by 2030, the ultra-low-power Prodigy Universal Processor is critical to continue doubling worldwide data center capacity every four years. Tachyum, co-founded by Dr. Radoslav Danilak, is building the world's fastest AI supercomputer (128 AI exaflops) in the EU based on Prodigy processors. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/.



Could ‘Peer Community In’ be the revolution in scientific publishing … – Gavi, the Vaccine Alliance

In 2017, three researchers from the National Research Institute for Agriculture, Food and the Environment (INRAE), Denis Bourguet, Benoit Facon and Thomas Guillemaud, founded Peer Community In (PCI), a peer-review-based service for recommending preprints (referring to the version of an article that a scientist submits to a review committee). The service greenlights articles and makes them and their reviews, data, codes and scripts available on an open-access basis. Out of this concept, PCI paved the way for researchers to regain control of their review and publishing system in an effort to increase transparency in the knowledge production chain.

The idea for the project emerged in 2016 following an examination of several failings in the science publishing system. Two major problems are the lack of open access for most publications, and the exorbitant publishing and subscription fees placed on institutions.

Even in France, where the movement for open science has been gaining momentum, half of publications are still protected by access rights. This means that they are not freely accessible to citizens, journalists, or any scientists affiliated with institutions that cannot afford to pay scientific journal subscriptions. These restrictions on the free circulation of scientific information are a hindrance to the sharing of scientific knowledge and ideas at large.

Moreover, the global turnover of the academic publishing industry in science, technology and medicine is estimated at US$10 billion for every 3 million articles published. This is a hefty sum, especially given that the profit margins enjoyed by major publishing houses have averaged 35-40% in recent years. Mindful of these costs and margins, the PCI founders wanted scientists and institutions to take back control of their own publishing. And so, in 2017, the Peer Community In initiative was born.

PCI sets up communities of scientists who publicly review and approve pre-prints in their respective fields, while applying the same methods as those used for conventional scientific journals. Under this peer-review system, editors (known as recommenders) carry out one or more review rounds before deciding whether to reject or approve the preprint submitted to the PCI. Unlike virtually all traditional journals, if an article is approved, the editor must write a recommendation outlining its content and merits.

This recommendation is then published along with all other elements involved in the editorial process (including reviews, editorial decisions, authors' responses, etc.) on the site of the PCI responsible for organising the preprint review. This level of transparency is what makes PCI unique within the current academic publishing system.

Lastly, the authors upload the finalised, approved and recommended version of the article free of charge and on an open access basis to the preprint server or open archive.

PCI is making traditional journal publication obsolete. Due to its de facto peer-reviewed status, the finalised, recommended version of the preprint is already suitable for citation. In France, PCI-recommended preprints are recognised by several leading institutions, review committees and recruitment panels at the National Centre for Scientific Research (CNRS). At the Europe-wide level, the reviewed preprints are recognised by the European Commission and funding agencies such as the Bill and Melinda Gates Foundation and the Wellcome Trust.

PCI is also unique in its ability to separate peer review from publishing, given that approved and recommended preprints can still be submitted by authors for publication in scientific journals. Many journals even advertise themselves as PCI-friendly, meaning that when they receive submissions of PCI-recommended preprints, they take into account the reviews already completed by PCI in order to speed up their editorial decision-making.

This initiative was originally intended exclusively for PCIs to review and recommend preprints, but authors were sometimes frustrated to see their recommended preprint only on dedicated servers (despite being reviewed and recommended, preprints are still poorly indexed and not always recognised as genuine articles) or to have to submit it for publication in a journal at the risk of being subjected to another round of review. However, since the creation of Peer Community Journal, scientists now have access to direct, unrestricted publishing of articles recommended by disciplinary PCIs.

Peer Community Journal is a diamond journal, meaning one that publishes articles with no fees charged to authors or readers. All content can be read free of charge without a pay-wall or other access restrictions. Designed as a general journal, Peer Community Journal currently comprises 16 sections (corresponding to the PCIs in operation) and is able to publish any preprint recommended by a disciplinary PCI.

Currently, there are 16 disciplinary PCIs (including PCI Evolutionary Biology, PCI Ecology, PCI Neuroscience and PCI Registered Reports) and several more are on the way. Together, they boast 1,900 editors, 130 members in the editorial committees and more than 4,000 scientist users overall. PCI and Peer Community Journal are recognised by 130 institutions worldwide, half of which (including the University of Perpignan Via Domitia) support the initiative financially. The number of French academics who are familiar with and/or who use PCI varies greatly between scientific communities. The percentage is very high among communities with a dedicated PCI (e.g., the ecology or evolutionary biology communities, with PCI Ecology and PCI Evol Biol, wherein an estimated half of scientists are now familiar with the system), but remains low among those without one.

To date, more than 600 articles have been reviewed through the system. Biology maintains a significant lead, but more and more fields are popping up, including archaeology and movement sciences. There is still plenty of scope for growth, in terms of greater investment from those familiar with the system and the creation of new PCIs by scientists from fields not yet represented by the current communities.

Other open-science initiatives have been set up across the globe, but none have quite managed to emulate the PCI model. Mostly limited to offers of peer-reviewed preprints (often directly or indirectly requiring a fee), these initiatives, such as Review Commons and PreReview, do not involve an editorial decision-making process and are therefore unable to effect change within the current publishing system.

While the PCI model is undeniably growing and now garners more than 10,000 unique visitors per month across all PCI websites, the creation of Peer Community Journal shows that the traditional academic publishing system is still intact. And it will doubtless endure into the near future, even though the preprint approval offered will hopefully become a sustainable model due to its cost-effectiveness and transparency across the board.

In the meantime, PCI and Peer Community Journal present a viable alternative for publishing diamond open access articles that are completely free of charge for authors and readers. In these changing times of unbridled, unjustifiable inflation placed on subscription and publishing prices, numerous institutions and universities are backing the rise of these diamond journals. PCI and Peer Community Journal embrace this dynamic by empowering all willing scientific communities to become agents of their own review and publishing process.

When science and society nurture each other, we reap the benefits of their mutual dialogue. Research can draw from citizens own contributions, improve their lives and even inform public decision-making. This is what we aim to show in the articles published in our series Science and Society, A New Dialogue, which is supported by the French Ministry of Higher Education and Research.

Denis Bourguet, INRAE; Etienne Rouzies, Université de Perpignan; and Thomas Guillemaud, INRAE

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Denis Bourguet is co-founder of Peer Community In and Peer Community Journal and president of the Peer Community In association.

Thomas Guillemaud is co-founder and works on the operation of Peer Community In and Peer Community Journal. Peer Community In has received funding from public bodies, including the Ministry of Higher Education and Research and numerous universities and research organisations, since 2016.

Etienne Rouzies does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

INRAE provides funding as a founding partner of The Conversation FR.

Université de Perpignan provides funding as a member of The Conversation FR.



Soap Star Dies: Rita McLaughlin Walter, Carol on As the World Turns – Soaps.com

Rita Walter (née McLaughlin) passed away on Christmas.

On December 26, David McLaughlin, the brother of former As the World Turns star Rita Walter, reported that she had died a day earlier, on the anniversary of her exit from the soap in 1981 following an 11-year run as Carol Deming Hughes Stallings Andropoulos Frazier.

"My sweet, loving sister Rita was called to Jesus on His birthday," read his message. "She was a well-known actress as well as dedicated server of the Lord Jesus."

"May you rest in peace, sis. I love you forever," he added. "You've taken my heart with you."

Walter, who has three children with her reverend husband, got her big break in the 1960s when she landed the uncredited role of Patty Duke's double on The Patty Duke Show. Six years later, the budding soap star made her daytime debut as Wendy Phillips on The Secret Storm.

But it was the role of As the World Turns' plucky Carol that really put Walter on the map. Introduced to the Oakdale scene as a college student working for much-married Lisa, the heroine tied the knot with her boss's son, Tom, only to have second husband Jay Stallings cheat on her with her ex's subsequent wife, Natalie!

That's Walter on the right, third row down.

Credit: CBS/Courtesy of the Everett Collection

Later, Carol, who seemed to attract calamities like flames do moths, wound up adopting Natalie's daughter with Jay, who was killed in a construction accident from which he could've been saved by colleague Steve Andropoulos, who, needless to say, became her next husband. Following their split – Carol wouldn't put her daughter's neck on the line so that Steve could conduct shady business with James Stenbeck – art imitated life when she married a reverend and left town.

Since retiring from showbiz, Walter, 71 when she passed away, had been working as an optician. On this somber occasion, pay your respects to the other soap alumni we've lost in 2022 via the photo gallery below.



Combining convolutional neural networks and self-attention for … – Nature.com

Figure 4: The overall architecture of MBSaNet.

MBSaNet is proposed to improve the performance of classification models on the task of automatic recognition of multilabel fundus diseases. The main idea of MBSaNet is the explicit combination of convolutional layers and SA layers, which gives the model both the generalization ability of a CNN and the global feature modeling ability of a Transformer [18,43]. Previous studies have demonstrated that the local prior of the convolutional layer makes it good at extracting local features from fundus images; however, we believe that long-term dependencies and a global receptive field are also essential for fundus disease identification, because even an experienced ophthalmologist is unable to make an accurate diagnosis from a small part of a fundus image (e.g., using only the macula). Considering that the SA layer, with its global modeling ability, can capture long-term dependencies, MBSaNet adopts a building strategy similar to the CoAtNet [18] architecture, with vertically stacked convolutional blocks and self-attention modules. The overall framework of MBSaNet is shown in Figure 4, and Table 7 shows the size of the input and output feature maps at each stage of the model. The framework comprises two parts. The first is a feature extractor with five stages, Stage0–Stage4, where Stage0 is our proposed multiscale feature fusion stem (MFFS), Stage1–Stage3 are all convolutional layers, and Stage4 is an SA layer with relative position representations. The second part is a multilabel classifier that predicts the sample category based on the features extracted by the above structure. We use the MBConv block, which includes residual connections and an SE block [27], as the basic building block in all convolutional stages, owing to its reverse-bottleneck design, the same as that of the Feedforward Network (FFN) block of Transformers. Unlike the regular MBConv block, MBSaNet replaces the max-pooling layers in the shortcut branch with convolutional layers of stride 2 in the downsampling strategy. This is a custom neural network that is trained from scratch.

The dataset was obtained from the International Competition on Ocular Disease Intelligent Recognition sponsored by Peking University. This dataset contains real patient data collected from different hospitals and medical centers in China, jointly launched by the Nankai University School of Computer Science-Beijing Shanggong Medical Information Technology Co., Ltd. joint laboratory. The training set is a structured ophthalmology database that includes the ages of 3,500 patients, color fundus images of their left and right eyes, and diagnostic keywords from clinicians. The test set includes an off-site test set and an on-site test set, but as with the training set, the number of samples in each category is unbalanced. Therefore, we also constructed a balanced test set with 50 images per class by randomly sampling a total of 400 images from the training set. The specific details of the dataset can be found in Table 8. Fundus images were recorded by various cameras, including Canon, Zeiss, and Kowa, with variable image resolutions. As illustrated in Figure 5(a), these data categorize patients into eight categories: normal (N), DR (D), glaucoma (G), cataract (C), AMD (A), hypertension (H), myopia (M), and other diseases/abnormalities (O). There are two points to note. First, a patient may carry one or more labels, as shown in Figure 5(b); that is, the task is a multidisease, multilabel image classification task. Second, as shown in Figure 5(c), the class labeled Other Diseases/Abnormalities (O) contains images related to more than 10 different diseases, as well as low-quality images caused by factors such as lens blemishes and invisible optic discs, which greatly expands the variability. All methods were developed and all experiments carried out in accordance with the relevant guidelines and regulations associated with this publicly available dataset.

Accuracy is the proportion of correctly classified samples among the total samples, the most basic evaluation indicator in classification problems. Precision refers to the probability that the true label of a sample is positive among all samples predicted to be positive. Recall refers to the probability of being predicted by the model to be a positive sample among all samples with positive labels; given the specificity of the task, we use a micro-average of precision and recall across categories in our experiments. AUC is the area under the ROC curve; the closer the value is to 1, the better the classification performance of the model. AUC is often used to measure model stability. The Kappa coefficient is another index calculated from the confusion matrix, used to measure the classification accuracy of the model and also for consistency testing, where \(p_0\) denotes the sum of the diagonal elements divided by the sum of all matrix elements, i.e., accuracy, and \(p_e\) denotes the sum of the products of the actual and predicted counts for each category, divided by the square of the total number of samples. F1_score, also known as the balanced score, is the harmonic (weighted) average of precision and recall; given the category imbalance in the dataset, we use micro-averaging to calculate metrics globally by counting the total true positives, false negatives and false positives. The closer the value is to 1, the better the classification performance of the model. Final_score is the average of F1_score, Kappa, and AUC.

$$\mathrm{Accuracy} = \frac{TP+TN}{TP+FP+TN+FN} \quad (1)$$

$$\mathrm{Precision} = \frac{TP}{TP+FP} \quad (2)$$

$$\mathrm{Recall} = \frac{TP}{TP+FN} \quad (3)$$

$$F1\_score = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (4)$$

$$\mathrm{Kappa} = \frac{p_0 - p_e}{1 - p_e} \quad (5)$$

$$Final\_score = \frac{F1\_score + \mathrm{Kappa} + \mathrm{AUC}}{3} \quad (6)$$
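For readers who want to recompute these metrics, the following is a minimal sketch using scikit-learn on multi-hot label arrays; flattening the label matrix for the Kappa computation is our assumption (cohen_kappa_score expects 1-D labels), and the 0.5 decision threshold is likewise illustrative.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score, f1_score, roc_auc_score

    def final_score(y_true, y_prob, threshold=0.5):
        # y_true: (N, 8) multi-hot ground truth; y_prob: (N, 8) predicted probabilities
        y_pred = (y_prob >= threshold).astype(int)
        f1 = f1_score(y_true, y_pred, average="micro")       # micro-averaged F1_score
        auc = roc_auc_score(y_true, y_prob, average="micro")
        # cohen_kappa_score expects 1-D labels, so the multilabel matrix is flattened
        kappa = cohen_kappa_score(y_true.ravel(), y_pred.ravel())
        return (f1 + kappa + auc) / 3                        # Final_score, Formula (6)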

The fundus image dataset contains some low-quality images, which are removed since they would not be helpful for training. To minimize unnecessary interference with the feature extraction process from the extra noise introduced by the black area of the fundus images, the redundant black area is cropped. We use the OpenCV library to load each image as a pixel array and use the edge position coordinates of the retinal region to remove the black borders. The cropped fundus images are then resized to 224×224, as shown in Figure 6. Data augmentation is the artificial generation of different versions of a real dataset to increase its size; images after data augmentation are shown in Figure 7. Because the dataset must be expanded while retaining the main features of the original images, we use operations such as random rotation by 90°, contrast adjustment, and center cropping. Finally, a global histogram equalization operation is performed on the original and augmented images, so that image contrast is higher and the gray-value distribution is more uniform.

Figure 6: Processing of the original training image.
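As an illustration of this pipeline, here is a minimal Python/OpenCV sketch; the brightness threshold of 10 used to locate the retinal region is our assumption, and the paper's exact cropping logic may differ.

    import cv2
    import numpy as np

    def preprocess_fundus(path, size=224):
        img = cv2.imread(path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Treat pixels brighter than a small threshold as part of the retina
        coords = np.argwhere(gray > 10)
        (y0, x0), (y1, x1) = coords.min(axis=0), coords.max(axis=0) + 1
        img = cv2.resize(img[y0:y1, x0:x1], (size, size))  # crop black border, resize
        # Global histogram equalization on the luminance channel
        yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
        yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])
        return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)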

The predictive ability of a classifier is closely related to its ability to extract high-quality features. In fundus multidisease identification, because several common eye diseases produce different lesion characteristics in fundus images, the lesion areas vary in size and distribution. We therefore propose a feature fusion module with convolution kernels of different sizes to extract multiscale primary features in the input stage of the network and fuse them along the channel dimension. Feature extractors with kernel sizes of 3×3, 5×5, 7×7, and 9×9 are used; since the convolution stride is set to 2, we pad the input image before each convolution operation to ensure that the output feature maps are the same size. By employing convolution kernels with different receptive fields in the horizontal direction to broaden the stem structure, more locally or globally biased features are extracted from the original images. Batch normalization and ReLU activation are then performed separately, and the resulting feature maps are concatenated. The experimental results show that widening the stem structure in the horizontal direction yields higher-quality low-level image features at the primary stage.
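A minimal PyTorch sketch of such a stem is given below; the per-branch channel width is a placeholder, not the paper's value.

    import torch
    import torch.nn as nn

    class MultiScaleStem(nn.Module):
        """Parallel stride-2 conv branches with different kernels, fused on channels."""
        def __init__(self, in_ch=3, branch_ch=16):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(
                    # padding=k//2 keeps all branch outputs the same spatial size
                    nn.Conv2d(in_ch, branch_ch, k, stride=2, padding=k // 2),
                    nn.BatchNorm2d(branch_ch),
                    nn.ReLU(inplace=True),
                )
                for k in (3, 5, 7, 9)
            ])

        def forward(self, x):
            return torch.cat([b(x) for b in self.branches], dim=1)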

CNNs have been the dominant structure for many CV tasks. Traditionally, regular convolutional blocks such as ResNet blocks [5] are well-known in large-scale convolutional networks; meanwhile, depthwise convolutions [44], which can be expressed as Formula 7, are popular on mobile platforms due to their lower computation cost and smaller parameter size. Recent studies have shown that an improved inverse residual bottleneck block (MBConv) [32,45], built on depthwise separable convolutions, can achieve both high accuracy and efficiency [7]. Inspired by the CoAtNet [18] framework, we consider the connection between the MBConv block and the FFN module in the Transformer (both adopt the inverted-bottleneck design: first expand the feature map to 4× the size of the input channel, then, after the depthwise separable convolution, project the 4×-wide feature map back to the original channel size to satisfy the residual connection), and mainly adopt the improved MBConv block, including the residual connection and SE [27] block, as the convolution building block. A convolution operation with kernel size 2×2 and stride 2 makes the output feature map size on the shortcut branch match the output size of the residual branch. The experimental results show that this slightly improves performance. The convolutional building blocks we use are shown in Figure 8, and the downsampling implementation can be expressed as Formula 8.

$$y_i = \sum_{j \in \mathcal{L}(i)} w_{i-j} \odot x_j \qquad (\text{depthwise convolution}) \quad (7)$$

where \(x_i, y_i \in \mathbb{R}^D\) denote the input and output at position \(i\), respectively, and \(\mathcal{L}(i)\) denotes a local neighborhood of \(i\), e.g., a 3×3 grid centered at \(i\) in image processing.

$$x \longleftarrow \mathrm{Norm}(\mathrm{Conv}(x, \mathrm{stride}=2)) + \mathrm{Conv}(\mathrm{DepthConv}(\mathrm{Conv}(\mathrm{Norm}(x), \mathrm{stride}=2))) \quad (8)$$
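To make Formula 8 concrete, here is a hedged PyTorch sketch of the downsampling block; the SE block is omitted for brevity, and the use of BatchNorm and GELU is an assumption rather than the paper's exact choice.

    import torch.nn as nn

    class MBConvDown(nn.Module):
        """Inverted-bottleneck MBConv with a strided-conv shortcut (Formula 8)."""
        def __init__(self, in_ch, out_ch, expansion=4):
            super().__init__()
            # Shortcut branch: Norm(Conv(x, stride=2)) replaces max-pooling
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 2, stride=2), nn.BatchNorm2d(out_ch))
            # Residual branch: Conv(DepthConv(Conv(Norm(x), stride=2)))
            hidden = in_ch * expansion
            self.norm = nn.BatchNorm2d(in_ch)
            self.expand = nn.Conv2d(in_ch, hidden, 1, stride=2)
            self.depthwise = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
            self.project = nn.Conv2d(hidden, out_ch, 1)
            self.act = nn.GELU()

        def forward(self, x):
            out = self.act(self.expand(self.norm(x)))
            out = self.act(self.depthwise(out))
            return self.shortcut(x) + self.project(out)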

In natural language processing and speech understanding, the Transformer design, of which the SA module is a crucial component, has been widely used. SA extends the receptive field to all spatial positions and computes weights based on the re-normalized pairwise similarity between pairs \((x_i, x_j)\), as shown in Formula 9, where \(\mathcal{G}\) indicates the global spatial space. Early research on stand-alone SA networks [33] showed that diverse CV tasks may be performed satisfactorily using SA modules alone, albeit with some practical limitations. After pretraining on the large-scale JFT dataset, ViT [11] applied the vanilla Transformer to ImageNet classification and produced outstanding results. However, with insufficient training data, ViT still trails well behind SOTA CNNs. This is mainly because typical Transformer architectures lack the translation equivariance [18] of CNNs, which increases generalization on small datasets [46]. Therefore, we adopt a method similar to CoAtNet: the global static convolution kernel is summed with the adaptive attention matrix before softmax normalization, which can be expressed as Formula 10, where \((i, j)\) denotes any position pair and \(w_{i-j}\) denotes the corresponding convolution weight; this improves the generalization ability of the Transformer-based architecture by introducing the inductive bias of CNNs.

$$y_{i} = \sum_{j \in \mathcal{G}} \underbrace{\frac{\exp\left(x_{i}^{\top} x_{j}\right)}{\sum_{k \in \mathcal{G}} \exp\left(x_{i}^{\top} x_{k}\right)}}_{A_{i,j}} \, x_{j} \quad (9)$$

$$y_{i}^{\text{pre}} = \sum_{j \in \mathcal{G}} \frac{\exp\left(x_{i}^{\top} x_{j} + w_{i-j}\right)}{\sum_{k \in \mathcal{G}} \exp\left(x_{i}^{\top} x_{k} + w_{i-k}\right)} \, x_{j} \quad (10)$$
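The following is a single-head PyTorch sketch of Formula 10, simplified to 1-D positions so that one learnable scalar w_{i-j} is stored per relative offset; the multi-head and 2-D details of the real model are omitted.

    import torch
    import torch.nn as nn

    class RelPosSelfAttention(nn.Module):
        """Self-attention with an additive relative-position bias (Formula 10)."""
        def __init__(self, dim, seq_len):
            super().__init__()
            self.qkv = nn.Linear(dim, 3 * dim)
            # One learnable scalar per relative offset in [-(L-1), L-1]
            self.rel_bias = nn.Parameter(torch.zeros(2 * seq_len - 1))
            idx = torch.arange(seq_len)
            self.register_buffer("rel_idx", idx[:, None] - idx[None, :] + seq_len - 1)

        def forward(self, x):  # x: (B, L, D)
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            logits = q @ k.transpose(-2, -1)               # x_i^T x_j
            logits = logits + self.rel_bias[self.rel_idx]  # + w_{i-j}, before softmax
            return torch.softmax(logits, dim=-1) @ v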

The receptive field size is one of the most critical differences between SA and convolutional modules. In general, a larger receptive field provides more contextual information, but this usually comes with higher model capacity. The global receptive field has been a key motivation for employing SA mechanisms in vision. However, a larger receptive field requires more computation: for global attention, the complexity is quadratic w.r.t. spatial size. Therefore, in designing the feature extraction backbone, considering the huge computational overhead of the Transformer structure and the small amount of training data for practical tasks, we use more convolution blocks and set up only two SA layers in Stage4 of the feature extraction stage. Experimental results show that this achieves a good balance between generalization performance and feature modeling ability.

Figure 8: Convolutional building blocks.

The fundus disease recognition task is a multilabel classification problem, so it is unsuitable for training models with traditional loss functions. We refer to the loss function used in [16,40]. All classified images can be represented as \(X = \{x_1, x_2, \ldots, x_i, \ldots, x_N\}\), where \(x_i\) is associated with the ground-truth label \(y_i\), \(i = 1 \ldots N\), and \(N\) is the number of samples. We wish to find a classification function \(F: X \longrightarrow Y\) that minimizes the loss function \(L\); we use \(N\) sets of labeled training data \((x_i, y_i)\) and apply one-hot encoding to each \(y_i\), so that \(y_i = [y_i^1, y_i^2, \ldots, y_i^8]\), where each \(y_i\) contains 8 values corresponding to the 8 categories in the dataset. We draw on the traditional problem-transformation approach to multilabel classification and transform the multilabel classification problem into a binary classification problem for each label. The final loss is the average of the loss values of the samples corresponding to each label. After studying weighted loss functions such as sample balancing and class balancing, we decided to use the weighted binary cross-entropy of Formula 11 as the loss function, where \(W = (1, 1.2, 1.5, 1.5, 1.5, 1.5, 1.5, 1.2)\) denotes the loss weights, the positive class is 1, the negative class is 0, and \(p(y_i)\) is the probability that sample \(i\) is predicted to be positive.

$$L = -\frac{1}{N} \sum_{i=1}^{N} W \left( y_{i} \log\left(p\left(y_{i}\right)\right) + \left(1 - y_{i}\right) \log\left(1 - p\left(y_{i}\right)\right) \right) \quad (11)$$
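In PyTorch, Formula 11 corresponds closely to the built-in BCEWithLogitsLoss with a per-class weight vector; the snippet below is a sketch of that correspondence with dummy tensors, not the authors' training code.

    import torch
    import torch.nn as nn

    # Per-class loss weights W from Formula 11
    W = torch.tensor([1.0, 1.2, 1.5, 1.5, 1.5, 1.5, 1.5, 1.2])
    criterion = nn.BCEWithLogitsLoss(weight=W)     # sigmoid + weighted BCE, averaged

    logits = torch.randn(4, 8)                     # (batch, 8 classes) raw outputs
    targets = torch.randint(0, 2, (4, 8)).float()  # multi-hot ground-truth labels
    loss = criterion(logits, targets)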

After obtaining the loss function, we need to choose an appropriate optimization function for the learnable parameters. Different optimizers affect parameter training differently, so we mainly consider the effects of SGD and Adam on model performance. We performed multiple comparison experiments under the same conditions. The results showed that Adam significantly outperformed SGD in convergence and shortened training time, possibly because with SGD the gradients of individual samples drive each update, which introduces additional noise; each iteration then does not move in the direction of the global optimum, so training can only converge to a local optimum, decreasing accuracy.



From a $10,000 Ring to a Pokémon Charizard Card: MrBeast Gifts Minecraft Players Whatever They Build on the Server – EssentiallySports

YouTube star MrBeast has been known to indulge the community in his challenge videos. Recently he invited 100 Minecraft players to build whatever they want and promised that he'll buy the thing they build. Interestingly, the budget range of the builds ran from $10 to $50,000, which made the players brainstorm the things they would want. Plus, each build had to be better and more impressive than the others.


MrBeast has been seen involving many games in his videos, like Fortnite, Minecraft, and Among Us. He has done many challenges on gaming servers and is back with another one, where players won everything from a full gaming PC to diamond rings.


Starting off with the $10 section, many players left the server while some used their creativity to do justice to the range. One player built a Feastables bar and straight away won the deal with MrBeast of getting what they built. Next up was the $50 section, in which there was close competition between a hamster and a Mario build. But the hamster stole the deal in the end on the scores of MrBeast and his crew.

Unfortunately, no build in the $100 range won, as none managed to impress the scorers. Next up, for the $500 build, there were builds like a plane ticket, a Meta Quest 2 VR headset, and $500 worth of ice cream. Ultimately, the plane ticket won, which allowed its maker to visit his family.

For the $1,000 build, close competition occurred between a Pac-Man arcade and a telescope. After the results from the scorers, the Pac-Man build copped the deal. The video became even more wholesome when the scorers got to choose between an engagement ring and a drum set in the $5,000 range. MrBeast asked if the player would propose to his girlfriend if he got him an engagement ring. After getting a yes from the player, the engagement ring build won. Moreover, courtesy of NordVPN, MrBeast agreed to buy a whole gaming PC for a player.

The $10,000 Jeep Renegade build got its maker the car in real life, which he dedicated to his wife. After Karl asked how it feels to be married to a nerd, the wife said she was immensely happy with it. To everyone's utter surprise, another player won a wedding ring in the $10,000 range so that they could propose to their boyfriend.


Also, a Pokémon Charizard card build won above all in the $15,000 section. Meanwhile, a car build was awarded the crown in the $25,000 range.

At last, the scorers went on to the $50,000 section. Perplexed by the builds of a car, a waffle maker, and even an empty slot, they found their winner. A player's unique idea of building a throne of cash won him the $50,000 section, leaving the scorers intrigued.



With that, another MrBeast video came to an end. Jimmy did mention that anyone who subscribes to his channel could get a chance to win a golden toilet and might get featured in his video. What do you think his next video will be? Do let us know in the comments below.



What is an SSL certificate, why is it important and how to get one? – Android Authority


Have you ever noticed the padlock symbol in your web browser's address bar? Most websites, including the one you're reading this article on, use SSL certificates to establish a secure connection. The padlock icon offers a visual indication that the website has a valid SSL certificate installed. It also signals that any information you enter on the website is fully encrypted in transit. In other words, nobody can eavesdrop on your connection and steal sensitive data like your password or credit card details.

But what exactly are SSL certificates, how do they work, and can anyone get one? Here's everything you need to know.

See also: What is encryption?

What is an SSL certificate and how does it work?


An SSL certificate is a digital certificate issued by a trusted authority, used for HTTPS or secure connections on the internet. A properly signed certificate provides a few key pieces of information that help your computer verify the identity of a website. It typically includes the name of the certificate owner, a unique serial number, an expiration date, and the digital signature of the issuing Certificate Authority (CA).

When you visit a website, your browser will automatically initiate a handshake process that checks for a valid SSL certificate. This process involves exchanging the SSL certificate and cryptographic keys, both of which cannot be spoofed.

SSL certificates aren't just symbolic, they also help keep your passwords safe from prying eyes.

If the details shared by the web server correspond to a valid certificate issued by a trusted authority, your browser will display a padlock symbol in the address bar. It will then initiate a secure connection, ensuring that data sent back and forth is completely encrypted. In a nutshell, the server and web browser use the pieces of information they know about each other to generate a cryptographic key at each end. And since nobody else has access to these details, they won't have the key to decrypt your communications.
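If you want to see roughly what your browser checks during this handshake, here is a small Python sketch that validates a server against the system's trusted CA store and prints a few certificate fields; the domain is just an example.

    import socket
    import ssl

    def get_certificate(host, port=443):
        ctx = ssl.create_default_context()  # validates against trusted CAs
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()

    cert = get_certificate("www.example.com")
    print(cert["subject"])   # certificate owner
    print(cert["issuer"])    # issuing Certificate Authority (CA)
    print(cert["notAfter"])  # expiration date

If validation fails, wrap_socket raises an SSL verification error, which is roughly the programmatic equivalent of the browser warning described below.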

If your web browser claims that the website you're trying to access is insecure, chances are that it's because of an invalid or expired SSL certificate. This can happen if the website owner forgets to renew their certificate, but if it happens on every single website, you should also check your system date and time. However, it could also mean that the website isn't trustworthy, so double-check that you've entered the correct address. Without an encrypted connection, you shouldn't enter any sensitive information like passwords, as your browser will send it in unencrypted plain text.

Related: Can your ISP see your browsing history? Here's what you need to know

If you're a website owner, getting an SSL certificate should be your top priority. This is especially true if you collect personal information, or even user input in general. SSL certificates help ensure that a hacker can't intercept any data sent back and forth, so there's also privacy at stake.

Most web browsers these days, including Google Chrome, warn users if they visit a non-HTTPS website, which will likely cause them to click away. Search engines like Google also rank websites with SSL enabled higher, so you're incentivized to install a certificate.

If you don't run a web or mail server, however, you don't need an SSL certificate. As long as you have a modern, up-to-date web browser, it's the website's responsibility to ensure a secure connection.

Related: The best encrypted private messenger apps


Users get this warning if a website doesn't have an SSL certificate installed.

If you do need an SSL certificate, don't worry – getting one doesn't take too much effort. A certificate is essentially a file that lives on your web server; all you have to do is place it in the right location and ensure that your host provides it to visitors. While you can self-sign your own certificates, web browsers won't accept those, as they lack the signature of a trusted authority.

You can self-sign your own digital certificate, but no web browser will accept it for secure connections.

The easiest way to get a valid SSL certificate is via your domain provider's website. GoDaddy, for example, will provide a single-domain SSL certificate at a fee of $299.99 every three years. DigiCert, meanwhile, offers certificates starting at $268 per year. And if you want a certificate for cheaper, other providers like NameCheap will have you covered for as little as $11 a year.

You can also get a valid SSL certificate for free via Let's Encrypt, which works just fine for a personal website or even a small business. Let's Encrypt is a non-profit Certificate Authority (CA) that aims to make internet security and encryption more widely accessible. The only downside is that you'll have to renew and reinstall your digital certificate every three months instead of every year or longer. That said, you can automate this process with a small bit of code running on your web server.
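As one hedged example of that automation, the Python sketch below could run from a daily cron job; it assumes certbot, the standard Let's Encrypt client, is installed, and the domain name is a placeholder.

    import socket
    import ssl
    import subprocess
    import time

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # Convert the certificate's "notAfter" field to seconds since the epoch
        return (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400

    if days_until_expiry("www.example.com") < 30:
        # certbot renew only reissues certificates that are close to expiry
        subprocess.run(["certbot", "renew", "--quiet"], check=True)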

Why are digital certificates so expensive?


If you're wondering why the cost of a digital certificate varies so much, it's because each one offers a different level of security, and there are very few trusted authorities out there. Some CAs have humans manually review each domain before issuing a certificate. Naturally, this makes them inherently more trustworthy, but also expensive. Premium SSL certificates may also display the name of the website owner in some web browsers (like Google Inc.), boosting the perceived legitimacy of the brand.

The price for a digital certificate can vary from $0 to hundreds of dollars, but for good reason.

For large businesses like banks where security matters above everything else, an SSL certificate is often a no-brainer. It also helps that many larger providers offer dedicated customer support and insurance in case something goes wrong.

Read more: What is a VPN, and why do you need one?
