On Thursday, the Cybersecurity and Infrastructure Security Agency kicked off its inaugural tabletop exercise with over 50 AI experts across government and industry in a four hour drill to help understand and mitigate digital threats to artificial intelligence systems.
Led by the Joint Cyber Defense Collaborative, a consortium of public and private sector leaders, the tabletop exercise simulated a cybersecurity incident targeting an AI-enabled system. Participants practiced incident response efforts to mitigate the damage caused by the hypothetical attack, including information sharing and operation collaboration.
The exercise was broken into three modules and described a hypothetical scenario in which hackers were able to circumvent an internally customized AI defense agent in an organization’s email system. A set of the government participants were kept out of the first two modules and entered during the third to then simulate how industry participants would interact and collaborate with new entrants after an incident had occurred.
The mission of the tabletop exercise was to build awareness of how AI systems can present new vectors for cyberthreats to digital networks, examine current responses and construct information sharing priorities for the critical infrastructure operators, security vendors and other stakeholders.
“This exercise marks another step in our collective commitment to reducing the risks posed by AI. It also highlights the importance of developing and delivering AI products that are designed with security as the top priority,” said CISA Director Jen Easterly in a statement. “As the national coordinator for critical infrastructure security and resilience, we’re excited to work with our partners to build on this effort to help organizations secure their AI systems.”
The tabletop exercise’s outcomes will help inform a forthcoming playbook CISA and the JCDC are working on releasing at the end of 2024, which will offer support and guidance for AI-based cyberattack responses. The agency plans to conduct a second tabletop exercise to test the playbook after its release.
Amazon Web Services, Cisco, Cranium, HiddenLayer, IBM, Microsoft, NVIDIA, OpenAI, Palantir, Palo Alto Networks, Protect AI, Robust Intelligence, Scale AI, FBI, National Security Agency, Office of the Director for National Intelligence, Department of Defense, and Department of Justice were among the participating agencies and companies.
“At OpenAI, we firmly believe that security is a team sport. It thrives on collaboration and benefits immensely from transparency,” said Matt Knight, Head of Security at OpenAI, in prepared remarks. “We are proud to have taken part in the tabletop exercise with JCDC.AI and other security leaders — these collaborations benefit our efforts of safely developing and deploying AI technology.”
The tabletop security exercise follows larger Biden administration initiatives to harness AI’s myriad beneficial uses while mitigating negative outcomes. This was the focal theme of the White House Office of Science and Technology Policy’s AI Aspirations conference, also held on Thursday.
While AI Aspirations featured system demos and discussions on near-term and present use cases for AI and machine learning systems, leaders noted that AI can both help support modern cybersecurity posture and hinder it.
“For security here at home, AI will be essential to boost cybersecurity and to protect our critical infrastructure,” OSTP Director Arati Prabhakar said in opening remarks at the conference.
In the last few weeks, a company has set terms for a forthcoming initial public offering. A second company has filed preliminary plans. Another biopharma company that is still in its infancy has seen its stock rise since it went public a little over two weeks ago.
The fact that the stock market has risen by double-digits is a big help. The biopharma sector, however, has not done as well. The Nasdaq Biotechnology index has risen just 3 percent and the SPDR S&P Biotech ETF has risen 2.8 percent. Institutional Investor reported recently that a number hedge funds specializing on biopharma and life science stocks are in the negative through the first five month of 2024.
Actuate Therapeutics is still working on developing therapies for cancers with high impact and difficult to treat. Last week, the company set terms for a planned IPO. The biopharmaceuticals firm will offer 5.6 million common shares for between $8 and 10 per share. The company would raise approximately $50 million at the midpoint.
Currently, two venture-capital firms hold a majority of shares: Bios Partners, with 55 percent ownership, and Kairos Ventures, with 17 percent.
Alumis, an early-stage biopharmaceutical company that develops oral therapies for patients with immune-mediated disease, has filed its initial plans to go public. Three investment firms, including a hedge fund, are listed as each owning at least 5 percent of shares. However, their exact stakes aren’t included. These include entities affiliated with the hedge fund firm Baker Brothers Life Sciences.
Alumis announced its latest private fundraise just three months ago. The $259 million Series C financing was led by Foresite Capital, a new investor Samsara BioCapital, and venBio partners. The fundraise attracted a number of new investors including Cormorant Asset Management, a life sciences hedge fund.
Actuate and Alumis will likely be optimistic about the IPO given the success of Rapport Therapeutics’ recent offering. Rapport priced shares at $17 each on June 6, the midpoint of the planned range of $16 to $18. Since the shares started trading, they’ve risen by about 43 percent. The clinical-stage biopharmaceuticals firm is developing small-molecule drugs for patients suffering from central neurological system disorders.
According to a regulatory filing, entities affiliated with Cormorant held 6.62 percent of shares before the IPO. According to a press announcement at the time, Cormorant led $150 million Series B financing round in August 2023. Rapport announced that Raymond Kelleher had been appointed to Cormorant’s board of directors. He has been the managing director since July 2020. Rapport stated in its IPO filing that Kelleher intended to resign immediately from the board, before the registration statement took effect. The company said that the resignation “was not due to any disagreements with us or any issues relating to our policies, operations, or practices.”
Alumis was also backed by other hedge funds. Perceptive Advisors (a Citadel Company), Surveyor Capital, and Logos Capital all participated in the Series-B financing. The regulatory filing for the IPO did not list any of the three as a 5 per cent owner.
Stay up to date with the latest transportation news by receiving TTNews directly in your email. ]
The way in which shippers obtain transportation may seem arbitrary and lacking in imagination or thought from the carrier’s perspective. Bid and choose the lowest rate. In reality, they have become more sophisticated in their approach. The transportation procurement process of shippers is evolving rapidly, and they would like to see more carriers keep up.
Before the pandemic most shippers held an annual bid event around the same time each year. The first quarter is a popular period, as many industries are slow during this time. The timing of procurement has changed since the pandemic. Shippers now run bids whenever it makes sense. Shippers want to know how and when to engage carriers in the bidding process because the negotiating power between the buyer (shipper), and the seller (carrier), shifts every one to three year.
Two indicators are useful to gain a better understanding of this shift: the New Rate Differential (also known as the Spot Premium Ratio) and the New Rate Difference (also known as the New Rate Differential) were developed and used in our iQ analytics company.
The NRD is calculated as the ratio between the average newly established contract rate and the average rate being replaced. The shipper can track NRD over time to see the overall market movement.
The SPR is a ratio of average spot prices to average contract prices. It signals imbalances between supply and demand. Shippers are constantly gauging the direction of the market to ensure they time their bids correctly. Or, if shippers follow a regular schedule of bidding, they will know what to expect when they make their bid.
What to Purchase
When deciding what type of capacity they need, shippers face three challenges.
First, because transportation is a derived need, any forecast for truck volumes is based on forecasts from other departments in an organization such as marketing or sales. A transportation forecast can only be as good as its underlying products.
Second, transportation buyers forecast volumes at the lane level. The more disaggregated the forecast, the worse the forecast will be. A lane-by lane volume forecast also makes assumptions about the operational decisions that will take place throughout the year.
These two challenges explain why most shippers base their forecasts on their most recent experience.
In addition to the challenges of forecasting, shippers also have to decide whether or not to include a particular lane in their bid. Not all lanes in a shipper’s network are identical. They differ in terms of total expected volume, consistency and strategic importance. Shippers who are sophisticated conduct a segmentation to identify lanes which should be handled differently.
According to an analysis by DAT and the Massachusetts Institute of Technology Center for Transportation & Logistics, lanes that had 12 or fewer loads in the previous year are less than 50% likely to have any volume the following year. Even if volume is achieved, the likelihood that it will not be included in the routing guide increases to over 40%.
How to Purchase
Shippers can create a portfolio of relationships for procurement, including contract, dedicated and spot, once they have segmented the network.
In a dedicated relationship a shipper has control over the day-today use of assets, whether leased or owned. This is the “make” of the classic buy-make decision. It is ideal for lanes that have consistent, balanced volumes and fully utilize drivers and trucks.
The traditional way to do business is through contractual relationships. In this approach, the price is fixed but the volume and capacity offered by the shipper are not. On these lanes, annual contracts are best. They provide enough volume to the carrier so that they can rely on it and have a truck ready when needed. Standard thresholds are one load per week or once every two weeks.
When the carrier and price are determined during the tender, this is called a spot or dynamic relationship. The spot market is ideal for lanes that have low, irregular and/or sparse volume. Recently, shippers have established direct application programming API connections with carriers in order to create more dynamic relationships.
This “relationship-portfolio” engine can give a better understanding of truckload procurement.
During the pandemic shippers learned the hard way the cost of unproductive and inefficient lanes. The days of bidding annually and selecting the lowest rates have passed for most shippers. They are more deliberate about how, when and what they procure their truckload capacities. Imaginative carriers can help.
Chris Caplice is a Ph.D. and chief scientist for DAT Freight & Analytics. He is also a senior research scientist for the Massachusetts Institute of Technology Center for Transportation & Logistics. He is the co-director and founder of the MIT FreightLab initiative.
Japanese shipping company Mitsui O.S.K. Lines (MOL), the newest member of the Smart Freight Center (SFC), has joined the community.
Smart Freight Center is a non-profit international organization based in Amsterdam, Netherlands. Its mission is to reduce greenhouse gas emissions (GHGs) from freight transportation.
SFC provides guidelines for decarbonization of logistics by visualizing GHG emission and making recommendations for reducing transportation.
It also promotes collaboration between various organizations and associations involved in international logistics.
The goal is to achieve zero emissions by 2050 and keep the global temperature increase average below 1.5degC.
SFC recently signed a Memorandum of Understanding with Green Marine, an environmental certification program for North American and European maritime industry, to support the decarbonization.
The center welcomed Japanese shipping giant NYK to its ranks last month.
MOL promotes initiatives to address environmental and other issues as part of its decarbonization program. The company wants to accelerate this initiative throughout its value chain.
One of the newest projects of the company is installing wind propulsion on a total seven newbuilding bulk carrier and multi-purpose vessel. MOL Group now has nine Wind Challenger equipped vessels, bringing its total to eleven.
Musk assured shareholders that he would remain. (Chris J. Radcliffe/Bloomberg News)
Stay up to date with the latest transportation news by receiving TTNews directly in your email. ]
DETROIT – Tesla shareholders voted on June 13 to restore Elon Musk’s $44.9 billion record pay package, which was thrown out earlier this year by a Delaware court. This vote sent a strong message of confidence in the leadership of the electric car maker.
Musk may not receive the compensation in stock anytime soon, even if the vote is favorable. The package will likely remain in Delaware Chancery Court or Supreme Court for several months as Tesla attempts to overturn the Delaware Judge’s rejection.
Musk has expressed doubts about his future at Tesla this year. He wrote on X that he wants a 25% stake to prevent him from moving the artificial intelligence development to another company. He has said that a higher stake in the company is necessary to control AI.
Tesla has also struggled to maintain its sales and profit margins, as the demand for electric cars slows down worldwide.
Musk assured shareholders at the annual meeting of the company on June 13 in Austin Texas that he would stay, saying he couldn’t sell any shares in the compensation package until five years had passed.
The 2018 CEO Performance Award and our relocation to Texas were approved by a large majority of $TSLA‘s stockholders at yesterday’s Annual Shareholders’ Meeting.
We have submitted all the necessary filings for our conversion to a Texas corporation. We can confirm… — Tesla June 14th, 2024
“It is not cash and I cannot run away, nor would I like to,” he said.
The company has not yet announced the vote totals for Musk’s pay, but it said that shareholders approved Musk’s compensation package, which was originally approved by the board of directors and stockholders over six years ago.
Tesla valued the package last at $44.9 billion when it filed a regulatory filing in April. It was once worth up to $56 billion, but its value has decreased in line with Tesla’s stock which has fallen about 25% this year.
In January, Chancellor Kathaleen S. Jude McCormick found in a lawsuit filed by a shareholder that Musk controlled the Tesla board in 2018 when it ratified a package and that the board failed to inform the shareholders who approved the package that same year.
Tesla has said that it will appeal the decision, but asked shareholders at the annual meeting to reapprove this package.
A separate vote approved the move of the company’s legal residence to Texas in order to avoid the Delaware courts, where Tesla is registered under the corporate status.
Musk, jubilantly, told the crowd at Tesla’s headquarters in Austin and its large factory: “It’s amazing.” “I don’t think we’re just opening a chapter for Tesla. We’re starting a brand new book.”
Musk and Tesla did not win everything. Shareholders approved measures to reduce the terms of board members from three to one year and to reduce the required vote for shareholder proposals to simple majority.
Legal experts say that the issue of Musk’s pay will be decided in Delaware. This is largely because Musk’s lawyers have assured McCormick that they won’t try to move this case to Texas.
They differ on whether the new ratification will make it easier or harder for Tesla to get the package approved.
Charles Elson, retired professor and founder of corporate governance center at University of Delaware said he does not think the vote will affect McCormick’s decision, which was based on law.
Elson said that McCormick’s ruling made the 2018 compensation package into a gift for Musk. This would require unanimous shareholder approval, which is an impossible threshold. Elson said that the vote is interesting in terms of public perception, but “in [his] view, it does not affect (the ruling]”.
John Lawrence, an attorney with Baker Botts in Dallas who defends corporations from shareholder lawsuits, agreed that the vote does not end the legal dispute or automatically give Musk stock options. He says that it gives Tesla a powerful argument to overturn the ruling.
McCormick’s decision should be reversed by Musk and Tesla, who will argue that the shareholders were well informed before the last votes. Lawrence said that the plaintiff will argue the vote is not legally binding and has no effect.
He said that the vote was conducted under Delaware law, and that it should be reviewed by the judge.
“This shareholder vote sends a strong message that you have a group of shareholders who are well-informed,” he said. “The Delaware judge could still decide that this does not change anything about her prior ruling, and doesn’t force her to make a different ruling moving forward. But I think that it gives Tesla and Musk a lot of ammunition to try to convince her to revisit this.”
Lawrence said that if the ruling is upheld, Musk will likely appeal to the Delaware Supreme Court.
Many institutional investors have criticized Musk’s large payout. Some cited the company’s recent struggles. Analysts said that individual shareholder votes likely put Musk’s compensation over the top.
Tesla announced on June 14 that Musk’s compensation package was approved by shareholders with a vote of 1,760,780 650 to 528.908,419, or 77%.
Musk began to inform shareholders of new developments in the “Full Self-Driving System” after the results were announced. He has staked his company’s future in the development of robots, artificial intelligence and autonomous vehicles.
Musk said that “Full Self-Driving”, with its new versions, is improving and that its safety per mile was better than human drivers.
“This is going to work.” This is going to be the case. “Mark my words, it’s only a matter time,” he said.
Despite its name “Full Self-Driving”, it can’t drive by itself. The company says that human drivers must always be ready to intervene. Tesla’s “Full Self-Driving Hardware” went on sale in late 2015, and Musk used the name since then as the company collected data to teach the computers how to drive.
Musk promised in 2019 that a fleet autonomous robotaxis would be ready by 2020. He said that the cars will be autonomous in 2022. Musk stated in April of last years that the system would be ready by 2023.
Since 2021, Tesla is beta-testing the “Full Self Driving” with volunteer owners. Last year, U.S. safety regulators forced Tesla to recall the software when they found that it misbehaved at intersections and violated traffic laws.
Musk said that the company has made great progress with its Optimus robot. He said that two robots are currently working in the factory in Fremont, Calif. They take battery cells from a production line and place them in shipping containers.
Musk believes that despite terminating the team responsible for Tesla’s Supercharger network of electric vehicle charging stations, the company will deploy “more chargers that are actually working” this year than the rest. Musk said he expected to spend $500m on Superchargers in the second half of this year.
Trucking news and briefs for Wednesday, June 12, 2024:
‘Brake Safety Day’ inspection blitz hit 5,000 vehicles all told, but date remains undisclosed
Inspectors in 47 jurisdictions throughout the U.S., Canada and Mexico conducted nearly 5,000 inspections in one day as part of the Commercial Vehicle Safety Alliance’s Brake Safety Day inspection and enforcement event.
Each year, CVSA law enforcement jurisdictions are invited to participate in the one-day, unannounced brake-safety inspection initiative. On that day, CVSA-certified commercial motor vehicle inspectors conduct their routine roadside inspections with a focus on brake systems and components, and provide brake-related inspection and violation data to CVSA. The unannounced blitz happens every spring, but CVSA kept a tight lid on the day, at least this year, not including it in the press materials around this year’s results. The 2023 event was held April 19.
This year’s Brake Safety Day data found that of the 4,898 inspections conducted, 4,328 commercial motor vehicles did not have any brake-related out-of-service violations — 88.4% of the total number of vehicles inspected.
However, inspectors identified 570 (11.6%) commercial vehicles with brake-related critical inspection item vehicle violations. Those vehicles were immediately placed out-of-service until the critical violations could be properly addressed.
Inspectors identified 330 commercial motor vehicles with 20% brake violations, meaning 20% or more of the vehicle’s service brakes had an out-of-service condition resulting in a defective brake. That was the top Brake Safety Day violation, accounting for 57.9% of all brake-related out-of-service violations.
Inspectors found other brake violations on 256 (44.9%) of the commercial motor vehicles inspected. Examples of other brake violations include worn brake lines/hoses, broken brake drums, an inoperative tractor protection system, inoperative low-air warning device, air leaks, hydraulic fluid leaks, etc.
Seventy-three commercial motor vehicles had steering-related brake violations — 12.8% of all brake-related out-of-service violations.
This year, emphasis was placed on brake lining/pad health and safety — the same focus of the upcoming Brake Safety Week. Brake lining/pad issues may result in violations or out-of-service conditions and may affect a motor carrier’s safety rating. Inspectors found 108 power units and 66 towed units with lining/pad violations.
A total of 114 brake lining/pad violations were discovered on power units. The top brake lining/pad violation on power units was for contamination, with 48 violations. Seventy-one brake lining/pad violations were identified on towed units, with 23 of those for cracks/voids in the linings/pads — the top brake lining/pad violation on towed units.
Nine U.S. jurisdictions with performance-based brake testers (PBBT) utilized them during Brake Safety Day for a total of 88 PBBT inspections conducted on Brake Safety Day. Only four (4.5%) failed to meet the 43.5% minimum braking efficiency required and were placed out of service.
Ohio warns of delays as ‘super load’ moves across state
The 12th of nearly two dozen “super loads,” and first of four loads that exceed 900,000 pounds, will depart a dock site in Adams County, Ohio, east of Cincinnati, on Sunday, June 16. The convoy will head to New Albany to deliver the load to the site of the new Intel plant in Licking County.
This load, an air processor known as a cold box used in the silicon chip manufacturing process, measures approximately 23 feet tall, 20 feet wide, 280 feet long, and weighs 916,000 pounds.
The move is scheduled to take more than a week. It will make stops in West Portsmouth, Lucasville, Waverly, Chillicothe, Rickenbacker, Groveport, Pickerington and Pataskala, before being delivered on Tuesday, June 25. Complete route details can be found here.
These extra-large loads will have significant traffic impacts as they move, according to the Ohio DOT. Drivers are strongly encouraged to plan ahead and avoid the route while the load is moving.
Due to the size of the loads and slow speed of the convoy, moves will begin early to ensure each move can be completed during daylight hours.
Because of an anticipated increased interest from the public, large crowds are also expected along the route, leading to additional traffic delays.
Notifications will be made in advance of each load leaving the dock on the Ohio River near the village of Manchester in Adams County. Updates will be provided as each load moves north toward central Ohio.
The Federal Motor Carrier Safety Administration is seeking approval for an information collection request (ICR) to survey commercial vehicle drivers to understand their perceptions and behaviors regarding safety belt usage and road safety.
The agency said that “existing data on the usage of safety belts and perceptions related to road safety do not capture the diversity of different types of CMV drivers in a post-coronavirus disease 2019 national emergency landscape.”
The survey, FMCSA added, will help “in gauging emerging trends among this cohort and will inform future messaging and communication efforts targeting CMV drivers.”
FMCSA will ask U.S.-based, self-identified CMV drivers to participate in an online survey. The results are not intended to be disseminated to the public, and information gathered “will not be used for the purpose of substantially informing influential policy decisions,” the agency noted.
While the White House Office of Management and Budget (OMB) reviews FMCSA’s request, the agency asks the public for comment on whether the survey is necessary for the performance of FMCSA’s functions; ways for FMCSA to enhance the quality usefulness, and clarity of the collected information; and more. Comments can be filed beginning Thursday, June 13, at www.regulations.gov by searching Docket No. FMCSA-2024-0091.
Channel to Port of Baltimore fully reopened to original dimensions
The U.S. Army Corps of Engineers and U.S. Navy Supervisor of Salvage and Diving have worked to clear the channel to allow it to fully reopen at its original operating dimensions.U.S. Army Corps of Engineers, Baltimore District
The Fort McHenry Federal Channel leading to the Port of Baltimore has now been restored to its original operational dimensions of 700 feet wide and 50 feet deep for commercial maritime transit, the U.S. Army Corps of Engineers announced this week.
As a result, the Port of Baltimore is expected to return to full operations.
Since March 26, the U.S. Army Corps of Engineers and U.S. Navy Supervisor of Salvage and Diving worked to clear Key Bridge wreckage and move the cargo ship, the Dali, from the channel. Following the removal of wreckage at the 50-foot mud-line, a survey of the channel was performed June 10, certifying the riverbed as safe for transit. Surveying and removal of steel at and below the 50-foot mud-line will continue to ensure future dredging operations are not impacted.
The Unified Command safely moved the Dali on May 20 and widened the limited access channel to 400 feet May 21, permitting all pre-collapse, deep-draft commercial vessels to transit through the Port of Baltimore. Now, the fully operational channel enables the flexibility to regain two-way traffic and cancel the additional safety requirements that were implemented because of the reduced channel width.
Fully restoring the channel to its original width and depth involved the removal of about 50,000 tons of bridge wreckage from the Patapsco River.
“We’ve cleared the Fort McHenry Federal Channel for safe transit. USACE will maintain this critical waterway as we have for the last 107 years,” said Col. Estee Pinchasin, Baltimore District commander. “I cannot overstate how proud I am of our team. It was incredible seeing so many people from different parts of our government, from around our country and all over the world, come together in the Unified Command and accomplish so much in this amount of time,” about two months all told.
Stay up to date with the latest transportation news by receiving TTNews directly in your email. ]
After July 1, Tennessee will ban the installation of boots on any truck or a trailer that is clearly identified as a business vehicle, either with a commercial license plate number or a U.S. Department of Transportation Number.
“Individuals will… buy these booting devices, show up in parking lot and start putting boot on people’s car, and then [are] awaiting them to come out of a restaurant and tell them to pay $200, or something, to get this boot removed,” said Jack Johnson, a state senator who, along with state Rep. Jake McCalmon, drafted the “Modernization of Towing, Imprisonment and Oversight (MOTION Act)” which was signed into law Bill Lee signed the MOTION Act into law on May 28. Johnson said that towers with a lack of integrity should be “held accountable.”
Donna England, President and CEO of Tennessee Motor Truck Association, stated that predatory towing practices are also a concern for truckers. England gave input to the legislators who drafted the bill. She expressed gratitude that a law was now in place to help end predatory practices.
She said, “This legislation will not only benefit Tennesseans, but also truckers throughout the country.”
Tennessee-registered Towers have been able conduct business in Arkansas for many years.
Johnson said that towers who are “unscrupulous” should be “held accountable.” (Tennessee General Assembly).
England said that predatory towing was brought to her attention by Alabama Trucking Association CEO Mark Colson last fall, when he shared with her how one of his members from a trucking company in Memphis had been ensnared and towed away by an unscrupulous tow tower.
England heard from her members shortly after that about similar incidents in Memphis. Memphis is a hotspot for the problem. This has also become a problem in Arkansas and Mississippi.
“One member was tow from West Memphis, Ark. and charged $3.500. Another was tow from Mississippi using a medium-duty wrecker which damaged their vehicle and was charged $4.500. She said that she had received numerous reports of trucks being towed in various locations around Memphis, including truck stops and lots without signs indicating parking. “Our goal is to create a level playing field for all parties involved, not to harm reputable companies. We have several towing firms in our association that are great to work with.
Paul Burnett, Director of the Arkansas Towing and Recovery Board told Transport Topics a story about a recent case in which a Tennessee towing firm’s license to operate as a booting and towing company in Arkansas was suspended after a complaint by a Tennessee trucking firm about predatory practices. (The towing firm can appeal the suspension.
According to the new Tennessee law vehicle booting is only legal in commercial lots where a parking attendant with a license and proof of employment can be physically present. The attendant must also be able remove the boot within 45 mins after being contacted. Payments can be made by credit/debit card. Parking lot signs are required to warn that unpaid vehicles parked there could be booted or town. The law also requires that Tennessee vehicle owners be properly notified when their vehicle is towed or sold by a towing firm.
The state Department of Revenue has also been directed to create, by July 2025, a motor vehicle portal that is accessible by law enforcement, towing firms, vehicle owners, and lien holders, and contains all public notifications regarding unclaimed vehicle sales.
“Predatory towing firms that hold equipment and freight hostage with excessive, fraudulent and inflated invoices tarnishes the reputation of the entire sector.” Spear said that they have been exploiting the trucking industry far too long and we will no longer pay ransoms. “ATA’s federation state associations is ready to fight back against unscrupulous businesses that target our industry, by injecting more fairness and accountability into state and local laws pertaining t towing.”
England said, “I encourage everyone to reach out to me if these problems persist after July 1, 2020.”
This fact sheet collects the recommendations from Chapter 5: “Financial Regulatory Agencies” of the joint report from Governing for Impact (GFI) and the Center for American Progress, “Taking Further Agency Action on AI: How Agencies Can Deploy Existing Statutory Authorities To Regulate Artificial Intelligence.” The chapter notes how artificial intelligence (AI) is poised to affect every aspect of the U.S. economy and play a significant role in the U.S. financial system, leading financial regulators to take various steps to address the impact of AI on their areas of responsibility. The impacts of AI on consumers, banks, nonbank financial institutions, and the financial system’s stability are all concerns to be investigated and potentially addressed by regulators using numerous existing authorities. The goal of these recommendations is to provoke a generative discussion about the following proposals, rather than outline a definitive executive action agenda. This menu of potential recommendations demonstrates that there are more options for agencies to explore beyond their current work, and that agencies should immediately utilize existing authorities to address AI.
Bank Secrecy Act
Relevant agencies: Treasury Department, Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, National Credit Union Administration, Securities and Exchange Commission, Commodity Futures Trading Commission
Using this authority, the Federal Reserve, OCC, FDIC, SEC, and CFTC could consider the following actions:
Regulate how institutions’ customer identification and suspicious activity reporting programs use AI. As AI becomes more integrated into financial systems, it can help institutions monitor and analyze transactions for Bank Secrecy Act (BSA) compliance more effectively, detecting anomalies or patterns indicative of illicit activities. However, regulators must be cognizant of the harms of offloading such an important law enforcement task to AI systems and should outline best practices for implementing AI systems and require institutions to develop standards for how they use AI to automate anti-money laundering tasks.
Require banks to periodically review their BSA systems to ensure accuracy and explainability. Accurate and timely reports of suspicious activities must be balanced against financial privacy and the Financial Crimes Enforcement Network’s ability to review the reports it receives. Regulators must ensure the AI institutions’ BSA systems use is accurate and can explain why activities are suspicious and therefore flagged. Regulators should require institutions to periodically review their AI—perhaps by hiring outside reviewers—to ensure continued accuracy and explainability to expert and lay audiences. Examiners must be able to review source code and dataset acquisition protocols.
Gramm-Leach-Bliley Act: Disclosure of nonpublic personal information
Relevant agencies: Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, National Credit Union Administration, Securities and Exchange Commission, Commodity Futures Trading Commission, Consumer Financial Protection Bureau
The regulators should make further use of this authority to ensure resiliency against AI-designed cyber threats, including the following actions:
Require third-party AI audits for all institutions. AI audits should be required for all institutions. Larger institutions can bring this practice in-house, depending on the ecosystem that develops around AI audits. However, smaller financial institutions may lack the staff and funding for in-house expertise or AI red-teaming but still need to mitigate AI risk. Accordingly, small institutions should undergo AI security audits by qualified outside consultants to determine where vulnerabilities lie. These audits help identify and address any vulnerabilities in AI systems that might be exploited by cyber threats, thus enhancing overall cybersecurity measures. This includes risks that cybercriminals could use AI to impersonate clients such that institutions inadvertently release customer information erroneously, believing that they are interacting with their clients. Regulators should set out guidelines for appropriate conflict checks and firewall protocols for auditors.
Require red-teaming of AI for the largest institutions. AI red-teaming is defined as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.”1 The largest firms should already be utilizing red-teaming for their AI products. In addition, they should be running red team/blue team exercises, and the agencies should require the teams to incorporate AI into their efforts. Using AI can significantly increase the speed at which red teams can find and exploit vulnerabilities, leaving blue teams at a significant disadvantage.2 Firms must know how malicious actors can use AI to attack their infrastructure to defend against it effectively. Banks and other financial institutions must conduct AI red-teaming to fortify their cyber defenses and proactively identify vulnerabilities.
Require disclosure of annual resources on AI cybersecurity and AI risk management and compliance. Financial institutions must disclose their annual resources dedicated to cybersecurity and AI risk management and compliance, which is crucial for transparency and accountability. Given the escalating reliance on AI-driven technologies in banking operations, the potential vulnerabilities and risks associated with cyber threats amplify significantly. By mandating such disclosures, stakeholders, including customers, regulators, and investors, gain valuable insights into a bank’s commitment to mitigating cyber risks through AI.
Equal Credit Opportunity Act
Relevant agency: Consumer Financial Protection Bureau
Using this authority, the CFPB could consider the following actions:
Require lenders to periodically review their lending systems to ensure explainability and that no new discriminatory activity applies. Research suggests that AI-based systems may result in lending decisions that have a disparate impact,3 which is a violation of the Equal Credit Opportunity Act (ECOA).4 The CFPB has already indicated in guidance that AI-based lending systems cannot be used when those systems “cannot provide the specific and accurate reasons for adverse actions.”5 Nevertheless, the CFPB should require lenders making lending decisions using AI to periodically review those systems—perhaps by hiring outside reviewers—to ensure explainability to expert and lay audiences and to confirm that discrimination does not inadvertently creep in as new data are used. Examiners must review source code and dataset acquisition protocols.
Prohibit lenders from using third-party credit scores and models developed with unexplainable AI. Many lenders use credit scores or other sources of information from third parties, which themselves may use AI to create those ratings.6 The CFPB should prohibit lenders from using unexplainable scores or models to avoid fair lending requirements and require all lenders subject to the ECOA to obtain information about the explainability of their third-party service providers’ AI.
Require lenders to employ staff with AI expertise. As described above, many lenders rely on third-party models for lending decisions. Given the pitfalls of algorithmic lending decisions, these firms must maintain diverse teams that include individuals with AI expertise to understand how such models operate and can introduce bias into firms’ lending decisions. These experts are necessary to identify and mitigate potential biases or unintended consequences of algorithmic decision-making. The 2023 executive order on AI required federal agencies to appoint chief artificial intelligence officers (CAIOs),7 whose duties were further outlined in the OMB M-24-10 AI guidance.8 The CFPB should follow that model to require firms to similarly designate a CAIO or designate an existing official to assume the duties of a CAIO.
Fair Credit Reporting Act
Relevant agency: Consumer Financial Protection Bureau
As it relates to AI, the CFPB should consider using this authority to take the following actions:
Require credit reporting agencies to describe whether and to what extent AI was involved in formulating reports and scores. Although the CFPB has issued guidance making clear that the ECOA requires lenders to make their AI systems explainable,9 it has yet to do the same with credit reporting agencies. Given that AI-based systems may result in the creation of credit scores that will result in a disparate impact, the CFPB should use its authority over credit reporting agencies to make clear that the AI used to generate credit scores should describe the extent to which AI was used and ensure the scores are explainable.
Require credit reporting agencies to periodically review their AI systems to ensure explainability and that no new discriminatory activity applies. Beyond simply requiring credit reporting agencies’ AI systems to be explainable to expert and lay audiences, the CFPB should also require the agencies to periodically review their systems to ensure continued explainability as new data are introduced. CFPB examiners must be able to review source code and dataset acquisition protocols.
Require credit reporting agencies to provide for human review of information that consumers contest as inaccurate. As part of the U.S.C. § 1681i “reasonable reinvestigation” mandate, credit reporting agencies should be required to have a human conduct the reinvestigation of AI systems’ determinations and inputs.10 Since AI-based systems may use black-box algorithms to determine credit scores or inputs that create credit scores, individually traceable data are required for adequate human review. As noted above, general explainability is important but would not be sufficient to allow human reviewers to correct potentially erroneous information under the Fair Credit Reporting Act (FCRA).
Given the preceding recommendation, require users of credit reports to inform consumers of their right to human review of inaccuracies in AI-generated reports in adverse action notices, per 15 U.S.C. § 1681(m)(4)(B).
Update model forms and disclosures to incorporate disclosure of AI usage. Given the CFPB’s mandate that credit reporting agencies and users of credit reports use model forms and disclosures, the CFPB should update those forms to include spaces for model form users to describe their AI usage.
Importantly, “consumer reports” under the FCRA include those that provide information used “in establishing the consumer’s eligibility for … employment purposes.”11 “Employment purposes” include the “purpose of evaluating a consumer for employment, promotion, reassignment or retention as an employee.”12 The CFPB should consider several policy changes to explicitly address electronic surveillance and automated management (ESAM) used by employers:
Require purveyors of workplace surveillance technologies to comply with the FCRA. As AI firms become increasingly used to mine data provided by employers, it is important that ESAM software companies be considered credit reporting agencies and comply with the corresponding restrictions. The CFPB should consider adding such companies to its list of credit reporting agencies13 and issue supervisory guidance explaining the circumstances under which ESAM companies act as credit reporting agencies and the corresponding responsibilities that they entail for ESAM companies and employers.
Ensure ESAM technologies used by employers comply with the FCRA. If the CFPB provides that these technology providers are credit reporting agencies, the CFPB must also make clear that users of their software comply with the FCRA. Accordingly, the CFPB should consider modifying its “Summary of Consumer Rights” to include information about employee FCRA rights concerning employers’ use of ESAM technologies.14 It should also consider modifying “Appendix E to Part 1022” to identify how employers furnishing employee data to ESAM technology companies and data brokers must ensure the accuracy of their furnished information.15
Community Reinvestment Act
Relevant agencies: Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation
The federal banking regulators should consider using their authority to:
Require banks to indicate whether they use AI to comply with Community Reinvestment Act (CRA) regulations and, if so, require those systems to be explainable. Given AI systems’ abilities to wade through mountains of information and identify the most profitable outcomes, banks may use them to game CRA regulations. For example, banks may use AI to help determine the most optimal assessment areas for profitability purposes. Regulators should require banks to disclose if they use AI to comply with the CRA or with regulations promulgated thereunder. In addition, these AI systems should be required to be explainable to expert and lay audiences to ensure that designated assessment areas are logical. Examiners must be able to review source code and dataset acquisition protocols.
Relevant agency: Consumer Financial Protection Bureau
Using this authority, the CFPB should consider the following actions:
Require financial institutions’ consumer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict consumer protection standards,periodically reviewing consumer-facing AI systems to ensure accuracy and explainability. As institutions begin using AI chatbots to communicate with customers, these systems must provide consumers with accurate information about their accounts, their firms’ policies and procedures, and the law. In addition, as these AI systems begin to be used for more than simply providing information—such as executing customers’ money transfers or asset purchases—it is imperative that they accurately and effectively execute transactions according to customers’ wishes and execute only transactions that are legal and comply with firms’ policies. The CFPB must ensure that institutions’ consumer-facing AI systems are accurate in all respects and require, through rulemaking, periodic review of their systems to ensure accuracy.
Require AI red-teaming and red team/blue team exercises for the largest institutions. The CFPB’s unfair, deceptive, or abusive acts or practices (UDAAP) authority can be used to prohibit the inadvertent disclosure of consumers’ information at institutions not subject to the Gramm-Leach-Bliley Act.16 Nonbank consumer financial service providers hold a wealth of information about customers off of which malicious AI systems feed, and they may be liable for customer losses stemming from AI-enabled fraud.17 With AI red-teaming18 or red team/blue team exercises, the red team attempts to attack a company’s information technology infrastructure while the blue team defends against such hacks. The largest firms should already be utilizing AI red-teaming and red team/blue team exercises, but given that real-world attackers have AI at their disposal, the agencies should require this. Having teams use AI can significantly increase the speed with which red teams can find and exploit vulnerabilities, leaving blue teams at a significant disadvantage.19 Firms must understand how malicious actors can use AI to attack their infrastructure and defend against it. Institutions must conduct AI red-teaming and red team/blue team exercises leveraging AI to fortify their cyber defenses and proactively identify vulnerabilities.
Require third-party AI audits for all institutions. AI audits should be required by all institutions. Larger institutions can bring this practice in-house, depending on the ecosystem that develops around AI audits. However, smaller financial institutions may lack the staff and funding for in-house expertise or AI red-teaming or red team/blue team exercises20 but still need to mitigate AI risk. Accordingly, small institutions should be required to undergo AI security audits by outside consultants to determine where vulnerabilities lie. These audits help identify and address any vulnerabilities in AI systems that might be exploited by cyber threats, thus enhancing overall cybersecurity measures. The CFPB may require such audits because failure to do so while claiming accurate and secure systems is unfair. Regulators should set guidelines for appropriate conflict checks and firewall protocols for auditors.
Require disclosure of annual resources dedicated to cybersecurity and AI risk management and compliance. Requiring nonbank consumer financial service providers to disclose their annual resources dedicated to cybersecurity and AI risk management and compliance is crucial for transparency and accountability. Given the escalating reliance on AI-driven technologies in financial institution operations,21 the potential vulnerabilities and risks associated with cyber threats amplify significantly. The CFPB could enact regulations mandating such resource disclosures for spending on cybersecurity and AI risk management and compliance. By mandating such disclosures, stakeholders, including customers, regulators, and investors, would gain valuable insights into the extent of an institution’s commitment to mitigating cyber risks through AI.
Federal Deposit Insurance Act, Federal Credit Union Act, and Bank Holding Company Act
Relevant agencies:Federal Reserve, Office of the Comptroller of the Currency, Federal Deposit Insurance Corporation, National Credit Union Administration
Using these authorities, the Federal Reserve, FDIC, OCC, and NCUA should consider the following actions:
Require financial institutions’ customer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict standards, and require those institutions to periodically review their customer-facing AI systems to ensure accuracy and explainability. As institutions begin using AI chatbots to communicate with customers, these systems provide customers with accurate information about their accounts, their firms’ policies and procedures, and the law. In addition, as these AI systems begin to be used for more than simply providing information—such as executing customers’ money transfers or asset purchases—it is imperative that they accurately and effectively execute transactions according to customers’ wishes and execute only transactions that are legal and within firms’ policies. Regulators must ensure that institutions’ customer-facing AI systems are accurate and require periodic reviews of their systems to ensure accuracy.
Ensure banks’ capital structures can withstand sudden and deep withdrawals of customer deposits or losses from banks’ risk management processes. Banks’ corporate clients are likely to begin using AI systems for treasury management—including bank deposits—and there are likely to be only a small number of providers of such systems, given the large computing power necessary for effective AI.22 AI-based treasury management systems may automatically move all firms’ cash, simultaneously creating significant movements of cash between financial institutions in short periods of time that result in sudden and significant drops in customer deposits. Regulators must ensure that banks maintain sufficient shareholder capital and high-quality liquid assets that enable them to withstand such shifts without failing.
Require that AI systems that are parts of banks’ capital, investment, and other risk management models be explainable. Banks today use various systems to automate their capital management strategies, evaluate investment opportunities, and otherwise mitigate risk. They will inevitably use AI for these and other purposes that have significant effects on their profitability and stability. The banking agencies already review firms’ risk management practices regarding the various models they use, and regulators should do the same with AI. Specifically, all AI systems must be explainable to expert and lay audiences. Examiners must be allowed to review source code and dataset acquisition protocols.
Ensure firms may move between different AI systems before they contract for one system. The sheer amount of computing power involved in generative AI means that most financial institutions will not develop their own systems in-house; instead, they will license software from a few competing nonfinancial institutions.23 Financial firms must be able to move between different and competing AI systems to avoid lock-in. Accordingly, regulators should make it a prerequisite for using AI that any system adopted from a third-party service provider allows for easy transition to a competing system upon the contract’s expiration. Regulators must ensure that there are many—for example, at least five—providers of AI software for banks that provide for base interoperability, so that not all institutions are using the same one or two pieces of software.
Require disclosure of annual resources dedicated to cybersecurity and AI risk management and compliance. Financial institutions must disclose their annual resources dedicated to cybersecurity and AI risk management and compliance, which is crucial for transparency and accountability. Given the escalating reliance on AI-driven technologies in banking operations, the potential vulnerabilities and risks associated with cyber threats amplify significantly. By mandating such disclosures, stakeholders, including customers, regulators, and investors, gain valuable insights into the extent of a bank’s commitment to mitigating cyber risks through AI. Bank and credit union annual disclosures could provide these disclosures.
Dodd-Frank Act: Systemic risk designation
Relevant agency: Financial Stability Oversight Council
Using its financial market utilities (FMU) designation authority, the FSOC should consider the following actions in the event that major providers of AI services reach a level of systemic importance to warrant oversight under these authorities:
Designate major providers of AI services to financial institutions as systemically important if they reach an adoption level that creates vulnerability. It may appear incongruous at first glance to designate AI service providers as not only systemically important but also as systemically important FMUs. They do not facilitate payments, are not clearinghouses, do not provide for settlement of financial transactions, nor do they engage in significant financial transactions with counterparties. However, providers of AI services to the largest and most systemically important financial institutions could still meet the FSOC’s two determinations if they become so important to traders and market makers that, if the AI systems stop working for those firms, it “could create, or increase, the risk of significant liquidity or credit problems [in the markets].”24Consider, for example, that market makers such as investment banks use AI systems to facilitate trades. If those systems stop working or execute faulty trades, significant liquidity could be removed from the markets, causing asset prices to drop precipitously along with financial instability. Similar arguments may be made for brokers using AI to manage their funding needs: If AI systems stop working, those brokers could lose access to funding sources, causing them to collapse. And the same is potentially true for high-frequency traders using AI to manage their trades—as faulty AI systems could result in flash crashes. Accordingly, the FSOC should monitor which AI systems are relied on by significant players in the markets and consider designating them as systemically important if their failure could threaten the stability of the U.S. financial system.
Designate the cloud service providers to those firms designated as systemically important. AI systems rely on cloud service providers, such as Amazon Web Services or Microsoft Azure, to operate; thus, if these cloud providers fail, AI systems also fail.25 Indeed, AI programs run on cloud providers’ servers and require cloud providers’ computing power to conduct the large-scale language processing required for AI. To the extent that AI software is of systemic importance to the financial system and may pose systemic risks if it fails, the fact that AI software cannot operate without cloud providers means that cloud providers are also of systemic importance to the financial system and may pose systemic risks themselves. This is not a new idea; members of Congress and advocacy organizations have previously called for such designation.26 However, the rise of AI gives this proposal new urgency. Accordingly, once the FSOC identifies which AI systems are systemically important, it should determine the cloud providers on which they rely and consider designating them as systemically important.
Securities Exchange Act of 1934
Relevant agency: Securities and Exchange Commission
Using this authority, the SEC should consider the following actions:
Require that AI systems that are parts of brokers’ capital, investment, and other risk management models be explainable. Brokers use a variety of systems to automate their capital management strategies, evaluate investment opportunities, and mitigate risk. They will inevitably use AI for these and other purposes that significantly affect their profitability and stability. The SEC already regulates brokers’ risk management models,27 and it should do the same with AI. Specifically, all AI systems must be explainable to expert and lay audiences. The SEC should also ensure that it and FINRA’s examiners may review source code and dataset acquisition protocols.
Require brokers’ customer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict investor protection standards, with those brokers periodically reviewing their customer-facing AI systems to ensure accuracy and explainability. As institutions begin using AI chatbots to communicate with customers, these systems must provide clients with accurate information about their accounts, their policies and procedures, and the law. In addition, as these AI systems are used for more than simply providing information—such as executing customer trades—it is critical that they accurately and effectively execute transactions according to customers’ wishes and execute only transactions that are legal and within firms’ policies. The SEC must ensure that brokers’ customer-facing AI systems undergo periodic review to ensure accuracy through third-party audits.
Require brokers using AI systems to make investment recommendations to ensure those systems are explainable and operate in clients’ best interests. There may come a day when AI systems are used to make investment recommendations. Before that occurs, the SEC must make clear that any AI systems used for that purpose must comply with existing rules that require investment recommendations to be in clients’ best interests.28 Among other things, AI systems must be explainable to expert and lay audiences. Brokers must also be able to explain why their recommendations are not provided based on conflicts of interest. Furthermore, the SEC should require brokers using AI to make investment recommendations to periodically review those systems and ensure that examiners may review source code and dataset acquisition protocols.
Require red-teaming of AI for exchanges, alternative trading systems, and clearinghouses. AI red-teaming is defined as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.”29 The largest firms should already be utilizing red teaming for their AI products. In addition, they should be running red team/blue team exercises, and the agencies should require the teams to incorporate AI into their efforts. Using AI can significantly increase the speed with which red teams can find and exploit vulnerabilities, leaving blue teams at a significant disadvantage.30 Firms must be aware of how malicious actors can use AI to attack their infrastructure to be able to defend against it. Banks and other financial institutions must conduct AI red-teaming to fortify their cyber defenses and proactively identify vulnerabilities. Given the systemic importance of these firms, the SEC should not allow third-party audits to suffice, but rather deploy multiple steps to ensure security and protection.
Ensure firms may move between different AI systems before they contract for one system. The sheer amount of computing power involved in generative AI means that most financial institutions will not develop their own systems in-house; instead, they will license software from a few competing nonfinancial institutions.31 It will be imperative that financial firms be able to move between different and competing AI systems to avoid lock-in. Accordingly, the SEC should make it a prerequisite of using AI that any system adopted from a third-party service provider allows for easy transition to a competing system upon the contract’s expiration. The SEC could require that brokers, exchanges, alternative trading systems, and clearinghouses ensure that there are many—for example, at least five—providers of AI software that provide for base interoperability before entering contracts, so that not all institutions are using the same one or two pieces of software.
Require disclosure of annual resources dedicated to cybersecurity spending and AI risk management and compliance. Financial institutions must disclose their annual resources dedicated to cybersecurity and AI risk management and compliance for transparency and accountability. Given the escalating reliance on AI-driven technologies in financial services, the potential vulnerabilities and risks associated with cyber threats amplify significantly. The SEC should, accordingly, mandate brokers, exchanges, and clearinghouses to disclose their annual expenditures on cybersecurity and AI risk management and compliance. By mandating such disclosures, the SEC can gain valuable insights into the extent of a firm’s commitment to mitigating AI risk management.
Investment Advisers Act of 1940
Relevant agency: Securities and Exchange Commission
Using this authority, the SEC should consider the following actions:
Require that registered investment advisers’ (RIAs) AI systems used to make investment recommendations are explainable and operate in clients’ best interests. There may come a day when AI systems are used to make investment recommendations. Before that occurs, the SEC must make clear that any AI systems used for that purpose must comply with existing rules that require investment recommendations to be in clients’ best interests. Among other things, RIAs’ AI systems must be explainable to both expert and lay audiences and explain why their recommendations are not provided based on conflicts of interest. Furthermore, the SEC should require RIAs that use AI to make investment recommendations to periodically review those systems and ensure that examiners may review source code and dataset acquisition protocols.
Require RIAs’ customer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict investor protection standards, with RIAs periodically reviewing their customer-facing AI systems to ensure accuracy and explainability. As institutions begin using AI chatbots to communicate with customers, these systems provide clients with accurate information about their accounts, their firms’ policies and procedures, and the law in a manner that is not misleading. In addition, as these AI systems begin to be used for more than simply providing information—such as executing customer trades—it is imperative that they accurately and effectively execute transactions according to customers’ wishes and execute only legal transactions within firms’ policies. The SEC must ensure that RIAs’ customer-facing AI systems are accurate and require periodic reviews of their systems to ensure accuracy.
Ensure RIAs may move between different AI systems before they contract for one system. The sheer amount of computing power involved in generative AI means that most financial institutions will not be developing their systems in-house; instead, they will license software from a small number of competing nonfinancial institutions.32 It is imperative that RIAs are able to move between different and competing AI systems to avoid lock-in. Accordingly, the SEC should make it a prerequisite for using AI that any system adopted from a third-party service provider allows for easy transition to a competing system upon the contract’s expiration. The SEC must require that RIAs ensure that there are many—for example, at least five—providers of AI software that provide for base interoperability before entering contracts, so that not all institutions are using the same one or two pieces of software.
Using myriad authorities under the Commodity Exchange Act, the CFTC should consider the following actions:
Require AI systems that are parts of futures commission merchants’, swap dealers’, or major swap participants’ capital, investment, or other risk management models to be explainable. Today, these entities use a variety of systems to automate their capital management strategies, evaluate investment opportunities, and mitigate risk. They will inevitably begin using AI for these and other purposes that significantly affect their profitability and stability. The CFTC should regulate its AI models and ensure that all AI systems are explainable to expert and lay audiences. The CFTC should also ensure that it and the National Futures Association’s examiners may review source code and dataset acquisition protocols.
Require futures commission merchants’ customer-facing AI systems to accurately respond to customer inquiries and execute transactions subject to strict investor protection standards. As institutions begin using AI chatbots to communicate with customers, these systems provide clients with accurate information about their accounts, their firms’ policies and procedures, and the law. In addition, as these AI systems begin to be used for more than simply providing information—such as executing customer trades—it is imperative that they accurately and effectively execute transactions according to customers’ wishes and execute only transactions that are legal and within firms’ policies. The CFTC must ensure that futures commission merchants’ customer-facing AI systems are accurate in all respects and require periodic reviews of those systems to ensure accuracy and explainability.
Require that FCMs’ AI systems used to make investment recommendations be explainable and operate in clients’ best interests. There may come a day when AI systems are used to make investment recommendations. Before that occurs, the CFTC must make clear that any AI systems used for that purpose must comply with existing rules that require investment recommendations to be in clients’ best interests. Among other things, AI systems must be explainable to expert and lay audiences and explain why recommendations are not provided based on conflicts of interest. Furthermore, the CFTC should require FCMs using AI to make investment recommendations, to periodically review those systems, and to ensure that examiners can review source code and dataset acquisition protocols.
Require red-teaming of AI for swap dealers, exchanges, and clearinghouses. AI red-teaming is defined as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.”33 The largest firms should use red-teaming for their AI products. In addition, they should run red team/blue team exercises and require the teams to incorporate AI into their efforts. Using AI can significantly increase the speed with which red teams can find and exploit vulnerabilities, leaving blue teams at a significant disadvantage.34 Firms must be aware of how malicious actors can use AI to attack their infrastructure to be able to defend against it. Banks and other financial institutions must conduct AI red-teaming to fortify their cyber defenses and proactively identify vulnerabilities.
Require third-party AI audits for all institutions. All institutions should require AI audits. Larger institutions can bring this practice in-house, depending on the ecosystem that develops around AI audits. However, smaller financial institutions may lack the staff and funding for in-house expertise or AI red-teaming but still need to mitigate against AI risk. Accordingly, small institutions should be required to undergo AI security audits by outside consultants to determine where vulnerabilities lie. These audits help identify and address any vulnerabilities in AI systems that might be exploited by cyber threats, thus enhancing overall cybersecurity measures. Regulators should set out guidelines for appropriate conflict checks and firewall protocols for auditors.
Ensure firms can move between different AI systems before they contract for one system. The sheer amount of computing power involved in generative AI means that most financial institutions will not be developing their systems in-house; instead, they will license software from a few competing nonfinancial institutions.35 It is imperative that financial firms are able to move between different and competing AI systems to avoid lock-in. Accordingly, the CFTC should make it a prerequisite for using AI that any system adopted from a third-party service provider allows for an easy transition to a competing system upon the contract’s expiration. The CFTC must require that all registrants and registered entities ensure that there are many—for example, at least five—providers of AI software that provide for base interoperability before entering contracts, so that not all institutions use the same one or two pieces of software.
Require disclosure of annual resources dedicated to cybersecurity and AI risk management and compliance. Financial institutions must disclose their annual resources dedicated to cybersecurity and AI risk management and compliance, which is crucial for transparency and accountability. Given the escalating reliance on AI-driven technologies in financial services, the potential vulnerabilities and risks associated with cyber threats amplify significantly. Accordingly, the CFTC should mandate that registrants and registered entities disclose their annual expenditures on cybersecurity and AI risk management and compliance. By mandating such disclosures, the CFTC can gain valuable insights into the extent of a firm’s commitment to mitigating AI risks.
VentureSoul Partners, a Mumbai-based venture debt company, has announced the launch its first debt fund, VentureSoul Capital Fund I. The fund’s target corpus is up to Rs 600 billion. The firm was founded in 2003 by three former HSBC Bankers: AnuragTripathi, AshishGala, and KunalWadhwa.
Micro Labs is the anchor investor in this fund, a well known pharmaceutical company.
The fund has also received commitments from a number of prominent corporate executives. Among them are E Madhusudan from Kreditbee, Abhishek Khemka of Baazar Kolkata and Ponnuswami from Pure Chemicals. Glen Appliances Ltd. and PSN Group have also committed.
Investing at the Series A stage
Advertisment
VentureSoul Partners’ SEBI-registered Alternative Investment Fund Category II will invest in companies that have reached the Series A stage and beyond, as long as they have a viable revenue model.
The fund is not sector-specific, but will target startups in the fintech and B2C, as well as B2B and SaaS segments. According to the fund’s founders, it aims to combine traditional bank principles with modern credit assessment technology to offer tailored solutions for startups.
How will the fund affect the venture debt market
The launch of the fund coincides with a period of significant growth in India’s venture-debt ecosystem.
According to a Stride Ventures report, venture debt investments for Indian startups will increase by 50% by 2023, and reach $1.2 billion (9,945 crore rupees).
Around 175 companies raised venture debt in 300 rounds. Since 2017, the venture debt market in India has grown at a CAGR of 34%.
Growth capital for startups
VentureSoul Partners wants to distinguish itself by offering growth capital to startup companies while focusing on partnerships over the long term. The firm intends to provide innovative debt solutions tailored for the unique needs of businesses in the new economy.
VentureSoul Partners aims to support startups with customized financial solutions by integrating traditional banking methods and advanced credit evaluation technologies. This will contribute to the growth of the venture loan market in India.
The rate of inflation is easing in the developed world except for one area where prices are soaring: the cost of shipping goods on the high seas.
The Drewry World Container Index released on Thursday shows that spot rates for full-size container shipping to the US and Europe have risen again. Three key routes are all above $6,000 for a unit of 40 feet. The spot rates for full-size a data-ga-onclick=”Inarticle articleshow link click#Small Biz#href” href=”https://economictimes.indiatimes.com/topic/shipping” target=”_blank”>shipping/a> containers to the US and Europe from Asia rose again in the most recent data, with three key routes all topping $6,000 for equivalence 40 foot units.
Nearly six months of regular attacks against vessels in the
Red Sea
The industry responsible for about 80% of international goods traffic has been stretched to its capacity, causing disruptions and bottlenecks at some of Asia’s largest ports.
Singapore’s maritime hub, one of the world’s major crossroads for seaborne cargo, has been experiencing a prolonged period of congestion. According to industry estimates the waiting time to get a berth in Singapore is now nearing five full days. In Chinese ports Ningbo Shanghai and Qingdao, it is between one and four days.
Demand for goods, especially in the US, is strong. Imports in the Port of Los Angeles – the busiest seaport of the US – remained above pre-pandemic levels in the first five month of 2024, despite a slight decline in May.
Drewry reports that the cost to ship a 40-foot container from Shanghai to Los Angeles last week increased by 0.8%, to $6,025. This was the sixth consecutive week of gains. The charge from Shanghai to Rotterdam rose 2.4% to $6.177, the highest since September 2022.
Mediterranean Sea
Drewry reports that the rate has risen by 3%, to $6.862. This is the highest since September 2022. This was the highest rate since September 2022. Drewry said that it “expects freight rates from China to continue to increase next week due congestion issues in Asian ports.”