Lagos
Tuesday, July 22, 2025

How Madueke Used 3 Oil Companies To Launder $115 Million – Witness

Diezani Alison-Madueke, former Petroleum Minister

By Akin Kuponiyi

An operative of the Economic and Financial Crimes Commission (EFCC), Usman Zakari, on Wednesday gave graphic details of how the former Petroleum Minister, Mrs. Diezani Alison-Madueke, laundered $115 million through three oil companies and two individuals.

Zakari is the EFCC's second witness in the ongoing trial of the former Peoples Democratic Party (PDP) governorship candidate in Kwara State, Mr. Mohammed Dele Belgore, and the former Minister for National Planning, Dr. Abubakar Sulaiman, who are standing trial over an alleged N450 million fraud.

Belgore and Sulaiman are being tried before Justice Rilwan Aikawa on an amended five-count charge.

According to the amended charge, on or about March 27, 2015, Alison-Madueke allegedly conspired with Belgore and Sulaiman to directly take possession of the sum of N450 million, which they reasonably ought to have known formed part of the proceeds of an unlawful act.

The two defendants were also said to have taken the funds in cash, in an amount exceeding that authorised by law, without going through any financial institution.

They were further accused of paying the sum of N50 million to one Sheriff Shagaya without going through a financial institution.

The alleged offences, according to the EFCC, are contrary to Sections 18(a), 15(2)(d), 1(a) and 16(d), and punishable under Sections 15(3) and (4), 16(2)(b) and 16(d), of the Money Laundering (Prohibition) (Amendment) Act, 2012.

At the resumed trial of the two defendants, the witness, while being led in evidence by the prosecutor, Mr. Rotimi Oyedepo, said the former Petroleum Minister used three oil companies, namely Auctus Integrated Investment Limited, Northernbelt Oil and Gas Limited and Midwestern Oil and Gas Limited, and two individuals, Leon Laitan Adesanya and an aide of the former minister, to launder $115 million.

Zakari said that, on the authority of the former Petroleum Minister, the three companies laundered $17.884 million, $16 million and $9.5 million respectively, while Adesanya laundered $1.150 million and $25.776 million.

The witness told the court that the money was taken to the bank in suitcases.

He also told the court how the former minister met with the managing director of a bank over how the money should be converted to Naira and distributed across the 36 states of the country and to officials of the Independent National Electoral Commission (INEC) during the 2015 general election.

However, proceedings were halted at the instance of the prosecutor, Mr. Oyedepo, who asked the court for an adjournment on the ground that he needed to meet a witness who would be testifying in another criminal matter at a Lagos High Court the following day.

The presiding judge, Justice Aikawa, adjourned the matter till June 8 for continuation of trial.
